00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 972 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3639 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.085 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.111 Using shallow fetch with depth 1 00:00:00.111 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.111 > git --version # timeout=10 00:00:00.145 > git --version # 'git version 2.39.2' 00:00:00.145 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.179 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.179 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.423 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.434 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.445 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:03.445 > git config core.sparsecheckout # timeout=10 00:00:03.454 > git read-tree -mu HEAD # timeout=10 00:00:03.470 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.490 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.490 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:03.574 [Pipeline] Start of Pipeline 00:00:03.592 [Pipeline] library 00:00:03.594 Loading library shm_lib@master 00:00:03.594 Library shm_lib@master is cached. Copying from home. 00:00:03.612 [Pipeline] node 00:00:03.620 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.622 [Pipeline] { 00:00:03.632 [Pipeline] catchError 00:00:03.634 [Pipeline] { 00:00:03.644 [Pipeline] wrap 00:00:03.652 [Pipeline] { 00:00:03.660 [Pipeline] stage 00:00:03.662 [Pipeline] { (Prologue) 00:00:03.865 [Pipeline] sh 00:00:04.146 + logger -p user.info -t JENKINS-CI 00:00:04.161 [Pipeline] echo 00:00:04.163 Node: GP11 00:00:04.170 [Pipeline] sh 00:00:04.467 [Pipeline] setCustomBuildProperty 00:00:04.477 [Pipeline] echo 00:00:04.479 Cleanup processes 00:00:04.485 [Pipeline] sh 00:00:04.773 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.773 986870 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.785 [Pipeline] sh 00:00:05.069 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.069 ++ grep -v 'sudo pgrep' 00:00:05.069 ++ awk '{print $1}' 00:00:05.069 + sudo kill -9 00:00:05.069 + true 00:00:05.081 [Pipeline] cleanWs 00:00:05.089 [WS-CLEANUP] Deleting project workspace... 00:00:05.089 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.096 [WS-CLEANUP] done 00:00:05.100 [Pipeline] setCustomBuildProperty 00:00:05.114 [Pipeline] sh 00:00:05.401 + sudo git config --global --replace-all safe.directory '*' 00:00:05.495 [Pipeline] httpRequest 00:00:05.848 [Pipeline] echo 00:00:05.850 Sorcerer 10.211.164.20 is alive 00:00:05.856 [Pipeline] retry 00:00:05.857 [Pipeline] { 00:00:05.865 [Pipeline] httpRequest 00:00:05.869 HttpMethod: GET 00:00:05.870 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.871 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.874 Response Code: HTTP/1.1 200 OK 00:00:05.874 Success: Status code 200 is in the accepted range: 200,404 00:00:05.875 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.762 [Pipeline] } 00:00:06.773 [Pipeline] // retry 00:00:06.779 [Pipeline] sh 00:00:07.068 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.082 [Pipeline] httpRequest 00:00:07.706 [Pipeline] echo 00:00:07.708 Sorcerer 10.211.164.20 is alive 00:00:07.718 [Pipeline] retry 00:00:07.721 [Pipeline] { 00:00:07.738 [Pipeline] httpRequest 00:00:07.744 HttpMethod: GET 00:00:07.744 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.746 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:07.749 Response Code: HTTP/1.1 200 OK 00:00:07.749 Success: Status code 200 is in the accepted range: 200,404 00:00:07.750 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:31.295 [Pipeline] } 00:00:31.312 [Pipeline] // retry 00:00:31.319 [Pipeline] sh 00:00:31.609 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:34.928 [Pipeline] sh 00:00:35.219 + git -C spdk log --oneline -n5 00:00:35.219 c13c99a5e test: Various fixes for Fedora40 00:00:35.219 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:35.219 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:35.219 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:35.219 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:35.240 [Pipeline] withCredentials 00:00:35.253 > git --version # timeout=10 00:00:35.266 > git --version # 'git version 2.39.2' 00:00:35.288 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:35.291 [Pipeline] { 00:00:35.300 [Pipeline] retry 00:00:35.302 [Pipeline] { 00:00:35.318 [Pipeline] sh 00:00:35.606 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:35.882 [Pipeline] } 00:00:35.900 [Pipeline] // retry 00:00:35.906 [Pipeline] } 00:00:35.922 [Pipeline] // withCredentials 00:00:35.933 [Pipeline] httpRequest 00:00:36.397 [Pipeline] echo 00:00:36.399 Sorcerer 10.211.164.20 is alive 00:00:36.409 [Pipeline] retry 00:00:36.412 [Pipeline] { 00:00:36.426 [Pipeline] httpRequest 00:00:36.431 HttpMethod: GET 00:00:36.432 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:36.433 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:36.444 Response Code: HTTP/1.1 200 OK 00:00:36.444 Success: Status code 200 is in the accepted range: 200,404 00:00:36.444 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:50.428 [Pipeline] } 00:00:50.445 [Pipeline] // retry 00:00:50.454 [Pipeline] sh 00:00:50.746 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:52.684 [Pipeline] sh 00:00:52.980 + git -C dpdk log --oneline -n5 00:00:52.980 caf0f5d395 version: 22.11.4 00:00:52.980 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:52.980 dc9c799c7d vhost: fix missing spinlock unlock 00:00:52.980 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:52.980 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:52.989 [Pipeline] } 00:00:52.998 [Pipeline] // stage 00:00:53.003 [Pipeline] stage 00:00:53.004 [Pipeline] { (Prepare) 00:00:53.015 [Pipeline] writeFile 00:00:53.024 [Pipeline] sh 00:00:53.303 + logger -p user.info -t JENKINS-CI 00:00:53.315 [Pipeline] sh 00:00:53.601 + logger -p user.info -t JENKINS-CI 00:00:53.614 [Pipeline] sh 00:00:53.926 + cat autorun-spdk.conf 00:00:53.926 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.926 SPDK_TEST_NVMF=1 00:00:53.926 SPDK_TEST_NVME_CLI=1 00:00:53.926 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.926 SPDK_TEST_NVMF_NICS=e810 00:00:53.926 SPDK_TEST_VFIOUSER=1 00:00:53.926 SPDK_RUN_UBSAN=1 00:00:53.926 NET_TYPE=phy 00:00:53.926 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:53.926 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:53.935 RUN_NIGHTLY=1 00:00:53.940 [Pipeline] readFile 00:00:53.966 [Pipeline] withEnv 00:00:53.968 [Pipeline] { 00:00:53.980 [Pipeline] sh 00:00:54.268 + set -ex 00:00:54.269 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:54.269 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:54.269 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:54.269 ++ SPDK_TEST_NVMF=1 00:00:54.269 ++ SPDK_TEST_NVME_CLI=1 00:00:54.269 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:54.269 ++ SPDK_TEST_NVMF_NICS=e810 00:00:54.269 ++ SPDK_TEST_VFIOUSER=1 00:00:54.269 ++ SPDK_RUN_UBSAN=1 00:00:54.269 ++ NET_TYPE=phy 00:00:54.269 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:54.269 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:54.269 ++ RUN_NIGHTLY=1 00:00:54.269 + case $SPDK_TEST_NVMF_NICS in 00:00:54.269 + DRIVERS=ice 00:00:54.269 + [[ tcp == \r\d\m\a ]] 00:00:54.269 + [[ -n ice ]] 00:00:54.269 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:54.269 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:54.269 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:54.269 rmmod: ERROR: Module irdma is not currently loaded 00:00:54.269 rmmod: ERROR: Module i40iw is not currently loaded 00:00:54.269 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:54.269 + true 00:00:54.269 + for D in $DRIVERS 00:00:54.269 + sudo modprobe ice 00:00:54.269 + exit 0 00:00:54.279 [Pipeline] } 00:00:54.294 [Pipeline] // withEnv 00:00:54.299 [Pipeline] } 00:00:54.313 [Pipeline] // stage 00:00:54.323 [Pipeline] catchError 00:00:54.325 [Pipeline] { 00:00:54.339 [Pipeline] timeout 00:00:54.339 Timeout set to expire in 1 hr 0 min 00:00:54.341 [Pipeline] { 00:00:54.355 [Pipeline] stage 00:00:54.357 [Pipeline] { (Tests) 00:00:54.372 [Pipeline] sh 00:00:54.658 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.658 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.658 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.659 + [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:54.659 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:54.659 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:54.659 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:54.659 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:54.659 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:54.659 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:54.659 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:54.659 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:54.659 + source /etc/os-release 00:00:54.659 ++ NAME='Fedora Linux' 00:00:54.659 ++ VERSION='39 (Cloud Edition)' 00:00:54.659 ++ ID=fedora 00:00:54.659 ++ VERSION_ID=39 00:00:54.659 ++ VERSION_CODENAME= 00:00:54.659 ++ PLATFORM_ID=platform:f39 00:00:54.659 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:54.659 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:54.659 ++ LOGO=fedora-logo-icon 00:00:54.659 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:54.659 ++ HOME_URL=https://fedoraproject.org/ 00:00:54.659 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:54.659 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:54.659 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:54.659 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:54.659 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:54.659 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:54.659 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:54.659 ++ SUPPORT_END=2024-11-12 00:00:54.659 ++ VARIANT='Cloud Edition' 00:00:54.659 ++ VARIANT_ID=cloud 00:00:54.659 + uname -a 00:00:54.659 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:54.659 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:55.596 Hugepages 00:00:55.596 node hugesize free / total 00:00:55.596 node0 1048576kB 0 / 0 00:00:55.596 node0 2048kB 0 / 0 00:00:55.596 node1 1048576kB 0 / 0 00:00:55.596 node1 2048kB 0 / 0 00:00:55.596 00:00:55.596 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:55.596 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:55.596 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:55.596 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:55.855 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:55.855 + rm -f /tmp/spdk-ld-path 00:00:55.855 + source autorun-spdk.conf 00:00:55.855 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.855 ++ SPDK_TEST_NVMF=1 00:00:55.855 ++ SPDK_TEST_NVME_CLI=1 00:00:55.855 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.855 ++ SPDK_TEST_NVMF_NICS=e810 00:00:55.855 ++ SPDK_TEST_VFIOUSER=1 00:00:55.855 ++ SPDK_RUN_UBSAN=1 00:00:55.855 ++ NET_TYPE=phy 00:00:55.855 ++ 
SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:55.855 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.855 ++ RUN_NIGHTLY=1 00:00:55.855 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:55.855 + [[ -n '' ]] 00:00:55.855 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.855 + for M in /var/spdk/build-*-manifest.txt 00:00:55.855 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:55.855 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.855 + for M in /var/spdk/build-*-manifest.txt 00:00:55.855 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:55.855 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.855 + for M in /var/spdk/build-*-manifest.txt 00:00:55.855 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:55.855 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.855 ++ uname 00:00:55.855 + [[ Linux == \L\i\n\u\x ]] 00:00:55.855 + sudo dmesg -T 00:00:55.855 + sudo dmesg --clear 00:00:55.855 + dmesg_pid=988088 00:00:55.855 + [[ Fedora Linux == FreeBSD ]] 00:00:55.855 + sudo dmesg -Tw 00:00:55.855 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.855 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.855 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:55.855 + [[ -x /usr/src/fio-static/fio ]] 00:00:55.855 + export FIO_BIN=/usr/src/fio-static/fio 00:00:55.855 + FIO_BIN=/usr/src/fio-static/fio 00:00:55.855 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:55.855 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:55.855 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:55.855 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.855 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.855 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:55.855 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.855 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.855 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:55.855 Test configuration: 00:00:55.855 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.855 SPDK_TEST_NVMF=1 00:00:55.855 SPDK_TEST_NVME_CLI=1 00:00:55.855 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.855 SPDK_TEST_NVMF_NICS=e810 00:00:55.855 SPDK_TEST_VFIOUSER=1 00:00:55.855 SPDK_RUN_UBSAN=1 00:00:55.855 NET_TYPE=phy 00:00:55.855 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:55.855 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.855 RUN_NIGHTLY=1 19:09:54 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:00:55.855 19:09:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:55.855 19:09:54 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:55.855 19:09:54 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:55.855 19:09:54 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:55.855 19:09:54 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.855 19:09:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.855 19:09:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.855 19:09:54 -- paths/export.sh@5 -- $ export PATH 00:00:55.855 19:09:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.855 19:09:54 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:55.855 19:09:54 -- common/autobuild_common.sh@440 -- $ date +%s 00:00:55.855 19:09:54 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731866994.XXXXXX 00:00:55.855 19:09:54 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731866994.iKwLQH 00:00:55.855 19:09:54 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:00:55.855 19:09:54 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:00:55.855 19:09:54 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.855 19:09:54 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:55.855 19:09:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:55.855 19:09:54 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:55.855 19:09:54 -- common/autobuild_common.sh@456 -- $ get_config_params 00:00:55.855 19:09:54 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:00:55.855 19:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.855 19:09:54 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 
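For reference, the xtrace above shows autorun.sh sourcing the generated autorun-spdk.conf and exporting its KEY=VALUE pairs before the build starts. Below is a minimal, self-contained sketch of that pattern; the file name and variable names follow the log, but the script itself is illustrative and is not SPDK's actual autorun.sh.

#!/usr/bin/env bash
# Illustrative sketch: load a KEY=VALUE job configuration the way the
# xtrace above does, then branch on the values. Not SPDK's autorun.sh.
set -euo pipefail

conf=${1:-autorun-spdk.conf}

# Bail out early if the job configuration is missing, mirroring the
# `[[ -f .../autorun-spdk.conf ]]` guard visible in the log.
[[ -f "$conf" ]] || { echo "missing $conf" >&2; exit 1; }

# shellcheck disable=SC1090
source "$conf"

# Use a couple of the variables the log shows being set
# (the defaults here exist only for this sketch).
: "${SPDK_TEST_NVMF:=0}" "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"

if [[ $SPDK_TEST_NVMF -eq 1 ]]; then
    echo "running NVMe-oF tests over ${SPDK_TEST_NVMF_TRANSPORT}"
fi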
00:00:55.855 19:09:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:55.855 19:09:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:55.855 19:09:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.855 19:09:54 -- spdk/autobuild.sh@16 -- $ date -u 00:00:55.855 Sun Nov 17 06:09:54 PM UTC 2024 00:00:55.855 19:09:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:55.855 LTS-67-gc13c99a5e 00:00:55.855 19:09:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:55.855 19:09:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:55.855 19:09:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:55.855 19:09:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:55.855 19:09:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:55.855 19:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.855 ************************************ 00:00:55.855 START TEST ubsan 00:00:55.855 ************************************ 00:00:55.855 19:09:54 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:00:55.855 using ubsan 00:00:55.855 00:00:55.855 real 0m0.000s 00:00:55.855 user 0m0.000s 00:00:55.855 sys 0m0.000s 00:00:55.855 19:09:54 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:00:55.855 19:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.855 ************************************ 00:00:55.855 END TEST ubsan 00:00:55.855 ************************************ 00:00:55.855 19:09:54 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:00:55.855 19:09:54 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:55.855 19:09:54 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:55.855 19:09:54 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:00:55.855 19:09:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:55.855 19:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.856 ************************************ 00:00:55.856 START TEST build_native_dpdk 00:00:55.856 ************************************ 00:00:55.856 19:09:54 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:00:55.856 19:09:54 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:55.856 19:09:54 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:55.856 19:09:54 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:55.856 19:09:54 -- common/autobuild_common.sh@51 -- $ local compiler 00:00:55.856 19:09:54 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:55.856 19:09:54 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:55.856 19:09:54 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:55.856 19:09:54 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:55.856 19:09:54 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:55.856 19:09:54 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:55.856 19:09:54 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:55.856 19:09:54 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:55.856 19:09:54 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.856 19:09:54 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.856 19:09:54 -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:55.856 19:09:54 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.856 19:09:54 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:55.856 caf0f5d395 version: 22.11.4 00:00:55.856 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:55.856 dc9c799c7d vhost: fix missing spinlock unlock 00:00:55.856 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:55.856 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:55.856 19:09:54 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:55.856 19:09:54 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:55.856 19:09:54 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:00:55.856 19:09:54 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:55.856 19:09:54 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:55.856 19:09:54 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:55.856 19:09:54 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:55.856 19:09:54 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:55.856 19:09:54 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:55.856 19:09:54 -- common/autobuild_common.sh@168 -- $ uname -s 00:00:55.856 19:09:54 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:55.856 19:09:54 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:00:55.856 19:09:54 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:00:55.856 19:09:54 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:00:55.856 19:09:54 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:00:55.856 19:09:54 -- scripts/common.sh@335 -- $ IFS=.-: 00:00:55.856 19:09:54 -- scripts/common.sh@335 -- $ read -ra ver1 00:00:55.856 19:09:54 -- scripts/common.sh@336 -- $ IFS=.-: 00:00:55.856 19:09:54 -- scripts/common.sh@336 -- $ read -ra ver2 00:00:55.856 19:09:54 -- scripts/common.sh@337 -- $ local 'op=<' 00:00:55.856 19:09:54 -- scripts/common.sh@339 -- $ ver1_l=3 00:00:55.856 19:09:54 -- scripts/common.sh@340 -- $ ver2_l=3 00:00:55.856 19:09:54 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:00:55.856 19:09:54 -- scripts/common.sh@343 -- $ case "$op" in 00:00:55.856 19:09:54 -- scripts/common.sh@344 -- $ : 1 00:00:55.856 19:09:54 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:00:55.856 19:09:54 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:55.856 19:09:54 -- scripts/common.sh@364 -- $ decimal 22 00:00:55.856 19:09:54 -- scripts/common.sh@352 -- $ local d=22 00:00:55.856 19:09:54 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:00:55.856 19:09:54 -- scripts/common.sh@354 -- $ echo 22 00:00:55.856 19:09:54 -- scripts/common.sh@364 -- $ ver1[v]=22 00:00:55.856 19:09:54 -- scripts/common.sh@365 -- $ decimal 21 00:00:55.856 19:09:54 -- scripts/common.sh@352 -- $ local d=21 00:00:55.856 19:09:54 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:55.856 19:09:54 -- scripts/common.sh@354 -- $ echo 21 00:00:55.856 19:09:54 -- scripts/common.sh@365 -- $ ver2[v]=21 00:00:55.856 19:09:54 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:00:55.856 19:09:54 -- scripts/common.sh@366 -- $ return 1 00:00:55.856 19:09:54 -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:55.856 patching file config/rte_config.h 00:00:55.856 Hunk #1 succeeded at 60 (offset 1 line). 00:00:55.856 19:09:54 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:00:55.856 19:09:54 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:00:55.856 19:09:54 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:00:55.856 19:09:54 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:00:55.856 19:09:54 -- scripts/common.sh@335 -- $ IFS=.-: 00:00:55.856 19:09:54 -- scripts/common.sh@335 -- $ read -ra ver1 00:00:55.856 19:09:54 -- scripts/common.sh@336 -- $ IFS=.-: 00:00:55.856 19:09:54 -- scripts/common.sh@336 -- $ read -ra ver2 00:00:55.856 19:09:54 -- scripts/common.sh@337 -- $ local 'op=<' 00:00:55.856 19:09:54 -- scripts/common.sh@339 -- $ ver1_l=3 00:00:55.856 19:09:54 -- scripts/common.sh@340 -- $ ver2_l=3 00:00:55.856 19:09:54 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:00:55.856 19:09:54 -- scripts/common.sh@343 -- $ case "$op" in 00:00:55.856 19:09:54 -- scripts/common.sh@344 -- $ : 1 00:00:55.856 19:09:54 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:00:55.856 19:09:54 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:55.856 19:09:54 -- scripts/common.sh@364 -- $ decimal 22 00:00:55.856 19:09:54 -- scripts/common.sh@352 -- $ local d=22 00:00:55.856 19:09:54 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:00:55.856 19:09:54 -- scripts/common.sh@354 -- $ echo 22 00:00:55.856 19:09:54 -- scripts/common.sh@364 -- $ ver1[v]=22 00:00:55.856 19:09:54 -- scripts/common.sh@365 -- $ decimal 24 00:00:55.856 19:09:54 -- scripts/common.sh@352 -- $ local d=24 00:00:55.856 19:09:54 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:55.856 19:09:54 -- scripts/common.sh@354 -- $ echo 24 00:00:55.856 19:09:54 -- scripts/common.sh@365 -- $ ver2[v]=24 00:00:55.856 19:09:54 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:00:55.856 19:09:54 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:00:55.856 19:09:54 -- scripts/common.sh@367 -- $ return 0 00:00:55.856 19:09:54 -- common/autobuild_common.sh@177 -- $ patch -p1 00:00:55.856 patching file lib/pcapng/rte_pcapng.c 00:00:55.856 Hunk #1 succeeded at 110 (offset -18 lines). 
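The xtrace above walks through scripts/common.sh's version check: both versions are split on ".-:" into arrays and compared component by component, so 22.11.4 is not less than 21.11.0 but is less than 24.07.0. A standalone sketch of the same idea is below; the function name and layout are illustrative, not SPDK's actual helpers.

#!/usr/bin/env bash
# Standalone sketch of the per-component version comparison shown in the
# xtrace above (IFS=.-: split, then numeric compare). Illustrative only.
version_lt() {                        # returns 0 if $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1       # strictly greater -> not less-than
        (( a < b )) && return 0       # strictly smaller -> less-than
    done
    return 1                          # equal -> not less-than
}

# Matches the checks in the log: 22.11.4 is not < 21.11.0, but is < 24.07.0.
version_lt 22.11.4 21.11.0 || echo "22.11.4 >= 21.11.0"
version_lt 22.11.4 24.07.0 && echo "22.11.4 <  24.07.0"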
00:00:55.856 19:09:54 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:00:55.856 19:09:54 -- common/autobuild_common.sh@181 -- $ uname -s 00:00:55.856 19:09:54 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:00:55.856 19:09:54 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:55.856 19:09:54 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:01.174 The Meson build system 00:01:01.174 Version: 1.5.0 00:01:01.174 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:01.174 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:01.174 Build type: native build 00:01:01.174 Program cat found: YES (/usr/bin/cat) 00:01:01.174 Project name: DPDK 00:01:01.174 Project version: 22.11.4 00:01:01.174 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:01.174 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:01.174 Host machine cpu family: x86_64 00:01:01.174 Host machine cpu: x86_64 00:01:01.174 Message: ## Building in Developer Mode ## 00:01:01.174 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:01.175 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:01.175 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:01.175 Program objdump found: YES (/usr/bin/objdump) 00:01:01.175 Program python3 found: YES (/usr/bin/python3) 00:01:01.175 Program cat found: YES (/usr/bin/cat) 00:01:01.175 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:01.175 Checking for size of "void *" : 8 00:01:01.175 Checking for size of "void *" : 8 (cached) 00:01:01.175 Library m found: YES 00:01:01.175 Library numa found: YES 00:01:01.175 Has header "numaif.h" : YES 00:01:01.175 Library fdt found: NO 00:01:01.175 Library execinfo found: NO 00:01:01.175 Has header "execinfo.h" : YES 00:01:01.175 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:01.175 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:01.175 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:01.175 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:01.175 Run-time dependency openssl found: YES 3.1.1 00:01:01.175 Run-time dependency libpcap found: YES 1.10.4 00:01:01.175 Has header "pcap.h" with dependency libpcap: YES 00:01:01.175 Compiler for C supports arguments -Wcast-qual: YES 00:01:01.175 Compiler for C supports arguments -Wdeprecated: YES 00:01:01.175 Compiler for C supports arguments -Wformat: YES 00:01:01.175 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:01.175 Compiler for C supports arguments -Wformat-security: NO 00:01:01.175 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:01.175 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:01.175 Compiler for C supports arguments -Wnested-externs: YES 00:01:01.175 Compiler for C supports arguments -Wold-style-definition: YES 00:01:01.175 Compiler for C supports arguments -Wpointer-arith: YES 00:01:01.175 Compiler for C supports arguments -Wsign-compare: YES 00:01:01.175 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:01.175 Compiler for C supports arguments -Wundef: YES 00:01:01.175 Compiler for C supports arguments -Wwrite-strings: YES 00:01:01.175 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:01.175 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:01.175 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:01.176 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:01.176 Compiler for C supports arguments -mavx512f: YES 00:01:01.176 Checking if "AVX512 checking" compiles: YES 00:01:01.176 Fetching value of define "__SSE4_2__" : 1 00:01:01.176 Fetching value of define "__AES__" : 1 00:01:01.176 Fetching value of define "__AVX__" : 1 00:01:01.176 Fetching value of define "__AVX2__" : (undefined) 00:01:01.176 Fetching value of define "__AVX512BW__" : (undefined) 00:01:01.176 Fetching value of define "__AVX512CD__" : (undefined) 00:01:01.176 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:01.176 Fetching value of define "__AVX512F__" : (undefined) 00:01:01.176 Fetching value of define "__AVX512VL__" : (undefined) 00:01:01.176 Fetching value of define "__PCLMUL__" : 1 00:01:01.176 Fetching value of define "__RDRND__" : 1 00:01:01.176 Fetching value of define "__RDSEED__" : (undefined) 00:01:01.176 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:01.176 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:01.176 Message: lib/kvargs: Defining dependency "kvargs" 00:01:01.176 Message: lib/telemetry: Defining dependency "telemetry" 00:01:01.176 Checking for function "getentropy" : YES 00:01:01.176 Message: lib/eal: Defining dependency "eal" 00:01:01.176 Message: lib/ring: Defining dependency "ring" 00:01:01.176 Message: lib/rcu: Defining dependency "rcu" 00:01:01.176 Message: lib/mempool: Defining dependency "mempool" 00:01:01.176 Message: lib/mbuf: Defining dependency "mbuf" 00:01:01.176 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:01.176 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:01.176 Compiler for C supports arguments -mpclmul: YES 00:01:01.176 Compiler for C supports arguments -maes: YES 00:01:01.176 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:01.176 Compiler for C supports arguments -mavx512bw: YES 00:01:01.176 Compiler for C supports arguments -mavx512dq: YES 00:01:01.176 Compiler for C supports arguments -mavx512vl: YES 00:01:01.176 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:01.176 Compiler for C supports arguments -mavx2: YES 00:01:01.176 Compiler for C supports arguments -mavx: YES 00:01:01.176 Message: lib/net: Defining dependency "net" 00:01:01.176 Message: lib/meter: Defining dependency "meter" 00:01:01.176 Message: lib/ethdev: Defining dependency "ethdev" 00:01:01.176 Message: lib/pci: Defining dependency "pci" 00:01:01.176 Message: lib/cmdline: Defining dependency "cmdline" 00:01:01.176 Message: lib/metrics: Defining dependency "metrics" 00:01:01.176 Message: lib/hash: Defining dependency "hash" 00:01:01.176 Message: lib/timer: Defining dependency "timer" 00:01:01.176 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:01.176 Compiler for C supports arguments -mavx2: YES (cached) 00:01:01.176 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:01.176 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:01.176 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:01.176 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:01.176 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:01.176 Message: lib/acl: Defining dependency "acl" 00:01:01.177 Message: lib/bbdev: Defining dependency "bbdev" 00:01:01.177 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:01.177 Run-time dependency libelf found: YES 0.191 00:01:01.177 Message: lib/bpf: Defining dependency "bpf" 00:01:01.177 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:01.177 Message: lib/compressdev: Defining dependency "compressdev" 00:01:01.177 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:01.177 Message: lib/distributor: Defining dependency "distributor" 00:01:01.177 Message: lib/efd: Defining dependency "efd" 00:01:01.177 Message: lib/eventdev: Defining dependency "eventdev" 00:01:01.177 Message: lib/gpudev: Defining dependency "gpudev" 00:01:01.177 Message: lib/gro: Defining dependency "gro" 00:01:01.177 Message: lib/gso: Defining dependency "gso" 00:01:01.177 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:01.177 Message: lib/jobstats: Defining dependency "jobstats" 00:01:01.177 Message: lib/latencystats: Defining dependency "latencystats" 00:01:01.177 Message: lib/lpm: Defining dependency "lpm" 00:01:01.177 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:01.177 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:01.177 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:01.177 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:01.177 Message: lib/member: Defining dependency "member" 00:01:01.177 Message: lib/pcapng: Defining dependency "pcapng" 00:01:01.177 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:01.177 Message: lib/power: Defining dependency "power" 00:01:01.177 Message: lib/rawdev: Defining dependency "rawdev" 00:01:01.177 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:01.177 Message: lib/dmadev: Defining dependency "dmadev" 00:01:01.177 Message: lib/rib: Defining dependency "rib" 00:01:01.177 Message: lib/reorder: Defining dependency "reorder" 00:01:01.177 Message: lib/sched: Defining dependency "sched" 00:01:01.178 Message: lib/security: Defining dependency "security" 00:01:01.178 Message: lib/stack: Defining dependency "stack" 00:01:01.178 Has header "linux/userfaultfd.h" : YES 00:01:01.178 Message: lib/vhost: Defining dependency "vhost" 00:01:01.178 Message: lib/ipsec: Defining dependency "ipsec" 00:01:01.178 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:01.178 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:01.178 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:01.178 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:01.178 Message: lib/fib: Defining dependency "fib" 00:01:01.178 Message: lib/port: Defining dependency "port" 00:01:01.178 Message: lib/pdump: Defining dependency "pdump" 00:01:01.178 Message: lib/table: Defining dependency "table" 00:01:01.178 Message: lib/pipeline: Defining dependency "pipeline" 00:01:01.178 Message: lib/graph: Defining dependency "graph" 00:01:01.178 Message: lib/node: Defining dependency "node" 00:01:01.178 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:01.178 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:01.178 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:01.179 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:01.179 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:01.179 Compiler for C supports arguments -Wno-unused-value: YES 00:01:02.121 Compiler for C supports arguments -Wno-format: YES 00:01:02.121 Compiler for C supports arguments -Wno-format-security: YES 00:01:02.121 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:02.121 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:02.121 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:02.121 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:02.121 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:02.121 Compiler for C supports arguments -mavx2: YES (cached) 00:01:02.121 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:02.121 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:02.121 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:02.121 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:02.121 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:02.121 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:02.121 Configuring doxy-api.conf using configuration 00:01:02.121 Program sphinx-build found: NO 00:01:02.121 Configuring rte_build_config.h using configuration 00:01:02.121 Message: 00:01:02.121 ================= 00:01:02.121 Applications Enabled 00:01:02.121 ================= 00:01:02.121 00:01:02.121 apps: 00:01:02.121 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:02.121 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:02.121 test-security-perf, 00:01:02.121 00:01:02.121 Message: 00:01:02.121 ================= 00:01:02.121 Libraries Enabled 00:01:02.121 ================= 00:01:02.121 00:01:02.121 libs: 00:01:02.121 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:02.121 meter, ethdev, 
pci, cmdline, metrics, hash, timer, acl, 00:01:02.121 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:02.121 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:02.121 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:02.121 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:02.121 table, pipeline, graph, node, 00:01:02.121 00:01:02.121 Message: 00:01:02.121 =============== 00:01:02.121 Drivers Enabled 00:01:02.121 =============== 00:01:02.121 00:01:02.121 common: 00:01:02.121 00:01:02.121 bus: 00:01:02.121 pci, vdev, 00:01:02.121 mempool: 00:01:02.121 ring, 00:01:02.121 dma: 00:01:02.121 00:01:02.121 net: 00:01:02.121 i40e, 00:01:02.121 raw: 00:01:02.121 00:01:02.121 crypto: 00:01:02.121 00:01:02.121 compress: 00:01:02.121 00:01:02.121 regex: 00:01:02.121 00:01:02.121 vdpa: 00:01:02.121 00:01:02.121 event: 00:01:02.121 00:01:02.121 baseband: 00:01:02.121 00:01:02.121 gpu: 00:01:02.121 00:01:02.121 00:01:02.121 Message: 00:01:02.121 ================= 00:01:02.121 Content Skipped 00:01:02.121 ================= 00:01:02.121 00:01:02.121 apps: 00:01:02.121 00:01:02.121 libs: 00:01:02.121 kni: explicitly disabled via build config (deprecated lib) 00:01:02.121 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:02.121 00:01:02.121 drivers: 00:01:02.121 common/cpt: not in enabled drivers build config 00:01:02.121 common/dpaax: not in enabled drivers build config 00:01:02.121 common/iavf: not in enabled drivers build config 00:01:02.121 common/idpf: not in enabled drivers build config 00:01:02.121 common/mvep: not in enabled drivers build config 00:01:02.121 common/octeontx: not in enabled drivers build config 00:01:02.121 bus/auxiliary: not in enabled drivers build config 00:01:02.121 bus/dpaa: not in enabled drivers build config 00:01:02.121 bus/fslmc: not in enabled drivers build config 00:01:02.121 bus/ifpga: not in enabled drivers build config 00:01:02.121 bus/vmbus: not in enabled drivers build config 00:01:02.121 common/cnxk: not in enabled drivers build config 00:01:02.121 common/mlx5: not in enabled drivers build config 00:01:02.121 common/qat: not in enabled drivers build config 00:01:02.121 common/sfc_efx: not in enabled drivers build config 00:01:02.121 mempool/bucket: not in enabled drivers build config 00:01:02.121 mempool/cnxk: not in enabled drivers build config 00:01:02.121 mempool/dpaa: not in enabled drivers build config 00:01:02.121 mempool/dpaa2: not in enabled drivers build config 00:01:02.121 mempool/octeontx: not in enabled drivers build config 00:01:02.121 mempool/stack: not in enabled drivers build config 00:01:02.121 dma/cnxk: not in enabled drivers build config 00:01:02.121 dma/dpaa: not in enabled drivers build config 00:01:02.121 dma/dpaa2: not in enabled drivers build config 00:01:02.121 dma/hisilicon: not in enabled drivers build config 00:01:02.121 dma/idxd: not in enabled drivers build config 00:01:02.121 dma/ioat: not in enabled drivers build config 00:01:02.121 dma/skeleton: not in enabled drivers build config 00:01:02.121 net/af_packet: not in enabled drivers build config 00:01:02.121 net/af_xdp: not in enabled drivers build config 00:01:02.121 net/ark: not in enabled drivers build config 00:01:02.121 net/atlantic: not in enabled drivers build config 00:01:02.121 net/avp: not in enabled drivers build config 00:01:02.121 net/axgbe: not in enabled drivers build config 00:01:02.121 net/bnx2x: not in enabled drivers build config 00:01:02.121 net/bnxt: not in 
enabled drivers build config 00:01:02.121 net/bonding: not in enabled drivers build config 00:01:02.121 net/cnxk: not in enabled drivers build config 00:01:02.121 net/cxgbe: not in enabled drivers build config 00:01:02.121 net/dpaa: not in enabled drivers build config 00:01:02.121 net/dpaa2: not in enabled drivers build config 00:01:02.121 net/e1000: not in enabled drivers build config 00:01:02.121 net/ena: not in enabled drivers build config 00:01:02.121 net/enetc: not in enabled drivers build config 00:01:02.121 net/enetfec: not in enabled drivers build config 00:01:02.121 net/enic: not in enabled drivers build config 00:01:02.121 net/failsafe: not in enabled drivers build config 00:01:02.121 net/fm10k: not in enabled drivers build config 00:01:02.121 net/gve: not in enabled drivers build config 00:01:02.121 net/hinic: not in enabled drivers build config 00:01:02.121 net/hns3: not in enabled drivers build config 00:01:02.121 net/iavf: not in enabled drivers build config 00:01:02.121 net/ice: not in enabled drivers build config 00:01:02.121 net/idpf: not in enabled drivers build config 00:01:02.121 net/igc: not in enabled drivers build config 00:01:02.121 net/ionic: not in enabled drivers build config 00:01:02.121 net/ipn3ke: not in enabled drivers build config 00:01:02.121 net/ixgbe: not in enabled drivers build config 00:01:02.121 net/kni: not in enabled drivers build config 00:01:02.121 net/liquidio: not in enabled drivers build config 00:01:02.121 net/mana: not in enabled drivers build config 00:01:02.121 net/memif: not in enabled drivers build config 00:01:02.121 net/mlx4: not in enabled drivers build config 00:01:02.121 net/mlx5: not in enabled drivers build config 00:01:02.121 net/mvneta: not in enabled drivers build config 00:01:02.121 net/mvpp2: not in enabled drivers build config 00:01:02.121 net/netvsc: not in enabled drivers build config 00:01:02.121 net/nfb: not in enabled drivers build config 00:01:02.121 net/nfp: not in enabled drivers build config 00:01:02.121 net/ngbe: not in enabled drivers build config 00:01:02.121 net/null: not in enabled drivers build config 00:01:02.121 net/octeontx: not in enabled drivers build config 00:01:02.121 net/octeon_ep: not in enabled drivers build config 00:01:02.121 net/pcap: not in enabled drivers build config 00:01:02.121 net/pfe: not in enabled drivers build config 00:01:02.121 net/qede: not in enabled drivers build config 00:01:02.121 net/ring: not in enabled drivers build config 00:01:02.121 net/sfc: not in enabled drivers build config 00:01:02.121 net/softnic: not in enabled drivers build config 00:01:02.121 net/tap: not in enabled drivers build config 00:01:02.122 net/thunderx: not in enabled drivers build config 00:01:02.122 net/txgbe: not in enabled drivers build config 00:01:02.122 net/vdev_netvsc: not in enabled drivers build config 00:01:02.122 net/vhost: not in enabled drivers build config 00:01:02.122 net/virtio: not in enabled drivers build config 00:01:02.122 net/vmxnet3: not in enabled drivers build config 00:01:02.122 raw/cnxk_bphy: not in enabled drivers build config 00:01:02.122 raw/cnxk_gpio: not in enabled drivers build config 00:01:02.122 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:02.122 raw/ifpga: not in enabled drivers build config 00:01:02.122 raw/ntb: not in enabled drivers build config 00:01:02.122 raw/skeleton: not in enabled drivers build config 00:01:02.122 crypto/armv8: not in enabled drivers build config 00:01:02.122 crypto/bcmfs: not in enabled drivers build config 00:01:02.122 
crypto/caam_jr: not in enabled drivers build config 00:01:02.122 crypto/ccp: not in enabled drivers build config 00:01:02.122 crypto/cnxk: not in enabled drivers build config 00:01:02.122 crypto/dpaa_sec: not in enabled drivers build config 00:01:02.122 crypto/dpaa2_sec: not in enabled drivers build config 00:01:02.122 crypto/ipsec_mb: not in enabled drivers build config 00:01:02.122 crypto/mlx5: not in enabled drivers build config 00:01:02.122 crypto/mvsam: not in enabled drivers build config 00:01:02.122 crypto/nitrox: not in enabled drivers build config 00:01:02.122 crypto/null: not in enabled drivers build config 00:01:02.122 crypto/octeontx: not in enabled drivers build config 00:01:02.122 crypto/openssl: not in enabled drivers build config 00:01:02.122 crypto/scheduler: not in enabled drivers build config 00:01:02.122 crypto/uadk: not in enabled drivers build config 00:01:02.122 crypto/virtio: not in enabled drivers build config 00:01:02.122 compress/isal: not in enabled drivers build config 00:01:02.122 compress/mlx5: not in enabled drivers build config 00:01:02.122 compress/octeontx: not in enabled drivers build config 00:01:02.122 compress/zlib: not in enabled drivers build config 00:01:02.122 regex/mlx5: not in enabled drivers build config 00:01:02.122 regex/cn9k: not in enabled drivers build config 00:01:02.122 vdpa/ifc: not in enabled drivers build config 00:01:02.122 vdpa/mlx5: not in enabled drivers build config 00:01:02.122 vdpa/sfc: not in enabled drivers build config 00:01:02.122 event/cnxk: not in enabled drivers build config 00:01:02.122 event/dlb2: not in enabled drivers build config 00:01:02.122 event/dpaa: not in enabled drivers build config 00:01:02.122 event/dpaa2: not in enabled drivers build config 00:01:02.122 event/dsw: not in enabled drivers build config 00:01:02.122 event/opdl: not in enabled drivers build config 00:01:02.122 event/skeleton: not in enabled drivers build config 00:01:02.122 event/sw: not in enabled drivers build config 00:01:02.122 event/octeontx: not in enabled drivers build config 00:01:02.122 baseband/acc: not in enabled drivers build config 00:01:02.122 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:02.122 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:02.122 baseband/la12xx: not in enabled drivers build config 00:01:02.122 baseband/null: not in enabled drivers build config 00:01:02.122 baseband/turbo_sw: not in enabled drivers build config 00:01:02.122 gpu/cuda: not in enabled drivers build config 00:01:02.122 00:01:02.122 00:01:02.122 Build targets in project: 316 00:01:02.122 00:01:02.122 DPDK 22.11.4 00:01:02.122 00:01:02.122 User defined options 00:01:02.122 libdir : lib 00:01:02.122 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:02.122 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:02.122 c_link_args : 00:01:02.122 enable_docs : false 00:01:02.122 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:02.122 enable_kmods : false 00:01:02.122 machine : native 00:01:02.122 tests : false 00:01:02.122 00:01:02.122 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:02.122 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
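The configure output above was produced by the long meson command earlier in the log (prefix and libdir pointing at dpdk/build, -Dc_args, -Denable_drivers=...), and the next step hands the generated build-tmp directory to ninja. A condensed sketch of that configure-and-build pattern follows; the paths and driver list are taken from the log, the remaining values are shortened and illustrative (and it uses the `meson setup` form that the deprecation warning above recommends).

#!/usr/bin/env bash
# Condensed sketch of the DPDK configure/build pattern the log follows:
# meson generates build-tmp, ninja compiles it. Flags are abbreviated;
# see the full command earlier in the log for exactly what the job used.
set -euo pipefail

dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

cd "$dpdk_dir"
meson setup build-tmp \
    --prefix="$dpdk_dir/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_drivers="$drivers"

# Build with one job per CPU thread, in the spirit of the -j48 in the log.
ninja -C build-tmp -j"$(nproc)"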
00:01:02.122 19:10:00 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:02.122 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:02.386 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:02.386 [2/745] Generating lib/rte_kvargs_def with a custom command 00:01:02.386 [3/745] Generating lib/rte_telemetry_def with a custom command 00:01:02.386 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:02.386 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:02.386 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:02.386 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:02.386 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:02.386 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:02.386 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:02.386 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:02.386 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:02.386 [13/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:02.386 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:02.386 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:02.386 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:02.386 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:02.386 [18/745] Linking static target lib/librte_kvargs.a 00:01:02.386 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:02.386 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:02.386 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:02.386 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:02.650 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:02.650 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:02.650 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:02.650 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:02.650 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:02.650 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:02.650 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:02.650 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:02.650 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:02.650 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:02.650 [33/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:02.650 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:02.650 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:02.650 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:02.650 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:02.650 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:02.650 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:02.650 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:02.650 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:02.650 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:02.650 [43/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:02.650 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:02.650 [45/745] Generating lib/rte_eal_def with a custom command 00:01:02.650 [46/745] Generating lib/rte_ring_mingw with a custom command 00:01:02.650 [47/745] Generating lib/rte_ring_def with a custom command 00:01:02.650 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:02.650 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:02.650 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:02.650 [51/745] Generating lib/rte_rcu_def with a custom command 00:01:02.650 [52/745] Generating lib/rte_rcu_mingw with a custom command 00:01:02.650 [53/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:02.650 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:02.650 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:02.650 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:02.650 [57/745] Generating lib/rte_mempool_def with a custom command 00:01:02.650 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:01:02.650 [59/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:02.650 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:02.650 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:02.650 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:02.650 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:02.650 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:02.650 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:02.650 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:02.650 [67/745] Generating lib/rte_net_mingw with a custom command 00:01:02.650 [68/745] Generating lib/rte_meter_def with a custom command 00:01:02.650 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:02.650 [70/745] Generating lib/rte_net_def with a custom command 00:01:02.650 [71/745] Generating lib/rte_meter_mingw with a custom command 00:01:02.650 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:02.918 [73/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:02.918 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:02.918 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:02.918 [76/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:02.918 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:02.918 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:02.918 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.918 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:02.918 [81/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:02.918 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:02.918 [83/745] Linking static target lib/librte_ring.a 00:01:02.918 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:02.918 [85/745] Generating lib/rte_pci_def with a custom command 00:01:02.918 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:02.918 [87/745] Linking static target lib/librte_meter.a 00:01:02.918 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:02.918 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:03.178 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:03.178 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:03.178 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:03.178 [93/745] Linking static target lib/librte_pci.a 00:01:03.178 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:03.178 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:03.178 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:03.178 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:03.178 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:03.443 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.443 [100/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.443 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:03.443 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:03.443 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:03.443 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:03.443 [105/745] Generating lib/rte_cmdline_def with a custom command 00:01:03.443 [106/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.443 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:03.443 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:03.443 [109/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:03.443 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:03.443 [111/745] Linking static target lib/librte_telemetry.a 00:01:03.443 [112/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:03.443 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:03.443 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:03.443 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:03.443 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:03.443 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:03.443 [118/745] Generating lib/rte_hash_def with a custom command 00:01:03.443 [119/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:03.443 [120/745] Generating lib/rte_hash_mingw with a custom command 00:01:03.443 [121/745] Generating lib/rte_timer_def with a custom command 00:01:03.443 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:03.705 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:03.705 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:03.705 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:03.967 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:03.967 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:03.967 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:03.967 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:03.967 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:03.967 [131/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:03.967 [132/745] Generating lib/rte_acl_def with a custom command 00:01:03.967 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:03.967 [134/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:03.967 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:03.967 [136/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.967 [137/745] Generating lib/rte_bitratestats_def with a custom command 00:01:03.967 [138/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:03.967 [139/745] Generating lib/rte_bbdev_def with a custom command 00:01:03.967 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:03.967 [141/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:03.967 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:03.967 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:03.967 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:03.967 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:03.967 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:04.227 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:04.227 [148/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:04.227 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:04.227 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:04.227 [151/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:04.227 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:01:04.227 [153/745] Generating lib/rte_cfgfile_def with a custom command 00:01:04.227 [154/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:04.227 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:04.227 [156/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:04.227 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:04.227 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:04.227 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:04.227 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:04.227 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:04.227 [162/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:04.227 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:04.227 
[164/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:04.227 [165/745] Linking static target lib/librte_rcu.a 00:01:04.227 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:04.227 [167/745] Generating lib/rte_cryptodev_def with a custom command 00:01:04.492 [168/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:04.492 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:04.492 [170/745] Linking static target lib/librte_timer.a 00:01:04.492 [171/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:04.492 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:04.492 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:04.492 [174/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:04.492 [175/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:04.492 [176/745] Generating lib/rte_efd_def with a custom command 00:01:04.492 [177/745] Generating lib/rte_distributor_mingw with a custom command 00:01:04.492 [178/745] Linking static target lib/librte_cmdline.a 00:01:04.492 [179/745] Linking static target lib/librte_net.a 00:01:04.492 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:04.492 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:04.760 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:04.760 [183/745] Linking static target lib/librte_mempool.a 00:01:04.760 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:04.760 [185/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:04.760 [186/745] Linking static target lib/librte_cfgfile.a 00:01:04.760 [187/745] Linking static target lib/librte_metrics.a 00:01:04.760 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.760 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.022 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:05.022 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.022 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:05.022 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:05.022 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:05.022 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:05.022 [196/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:05.022 [197/745] Linking static target lib/librte_eal.a 00:01:05.022 [198/745] Generating lib/rte_gpudev_def with a custom command 00:01:05.022 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:05.283 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:05.283 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:05.283 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:05.283 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:05.283 [204/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:05.283 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.283 [206/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:05.283 [207/745] Linking static target 
lib/librte_bitratestats.a 00:01:05.283 [208/745] Generating lib/rte_gro_mingw with a custom command 00:01:05.283 [209/745] Generating lib/rte_gro_def with a custom command 00:01:05.283 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.283 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:05.547 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:05.547 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:05.547 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:05.547 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:05.547 [216/745] Generating lib/rte_gso_def with a custom command 00:01:05.547 [217/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.547 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:05.547 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:05.811 [220/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:05.811 [221/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:05.811 [222/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.811 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:05.811 [224/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:05.811 [225/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.811 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:01:05.811 [227/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:05.811 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:05.811 [229/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:05.811 [230/745] Linking static target lib/librte_bbdev.a 00:01:05.811 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:05.811 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:05.811 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:05.811 [234/745] Generating lib/rte_latencystats_def with a custom command 00:01:06.073 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:06.073 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:06.073 [237/745] Generating lib/rte_lpm_mingw with a custom command 00:01:06.073 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:06.073 [239/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:06.073 [240/745] Linking static target lib/librte_compressdev.a 00:01:06.073 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:06.073 [242/745] Linking static target lib/librte_jobstats.a 00:01:06.336 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:06.337 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:06.337 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:06.337 [246/745] Linking static target lib/librte_distributor.a 00:01:06.337 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:06.597 [248/745] Generating 
lib/rte_member_def with a custom command 00:01:06.597 [249/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.597 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:06.597 [251/745] Generating lib/rte_member_mingw with a custom command 00:01:06.597 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:06.597 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:06.597 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:06.597 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:06.597 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:06.860 [257/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.860 [258/745] Linking static target lib/librte_bpf.a 00:01:06.860 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:06.860 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:06.860 [261/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:06.860 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:06.860 [263/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:06.860 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:06.860 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.860 [266/745] Generating lib/rte_power_def with a custom command 00:01:06.860 [267/745] Generating lib/rte_power_mingw with a custom command 00:01:06.860 [268/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:06.860 [269/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:06.860 [270/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:06.860 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:06.860 [272/745] Linking static target lib/librte_gpudev.a 00:01:06.860 [273/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:06.860 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:06.860 [275/745] Generating lib/rte_rawdev_def with a custom command 00:01:06.860 [276/745] Linking static target lib/librte_gro.a 00:01:06.860 [277/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:06.860 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:06.860 [279/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.122 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:07.122 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:07.122 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:07.122 [283/745] Generating lib/rte_rib_def with a custom command 00:01:07.122 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:07.122 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:07.122 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:07.122 [287/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:07.122 [288/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.122 [289/745] Generating lib/rte_reorder_mingw with a custom command 00:01:07.384 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:07.384 [291/745] Generating lib/gro.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:07.384 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:07.384 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:07.384 [294/745] Generating lib/rte_sched_def with a custom command 00:01:07.384 [295/745] Generating lib/rte_sched_mingw with a custom command 00:01:07.384 [296/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.384 [297/745] Generating lib/rte_security_def with a custom command 00:01:07.384 [298/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:07.384 [299/745] Generating lib/rte_security_mingw with a custom command 00:01:07.384 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:07.384 [301/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:07.384 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:07.384 [303/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:07.384 [304/745] Linking static target lib/librte_latencystats.a 00:01:07.384 [305/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:07.384 [306/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:07.384 [307/745] Generating lib/rte_stack_def with a custom command 00:01:07.384 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:07.648 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:07.648 [310/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:07.648 [311/745] Linking static target lib/librte_rawdev.a 00:01:07.648 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:07.648 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:07.648 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:07.648 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:07.648 [316/745] Linking static target lib/librte_stack.a 00:01:07.648 [317/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:07.648 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:07.648 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:07.648 [320/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:07.648 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.648 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.648 [323/745] Linking static target lib/librte_dmadev.a 00:01:07.913 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.913 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:07.913 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:07.913 [327/745] Linking static target lib/librte_ip_frag.a 00:01:07.913 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:07.913 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:07.913 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:07.913 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.913 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:01:08.176 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:08.176 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.176 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.176 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:08.176 [337/745] Generating lib/rte_fib_def with a custom command 00:01:08.439 [338/745] Generating lib/rte_fib_mingw with a custom command 00:01:08.439 [339/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.439 [340/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:08.439 [341/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.439 [342/745] Linking static target lib/librte_gso.a 00:01:08.439 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.439 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:08.439 [345/745] Linking static target lib/librte_regexdev.a 00:01:08.702 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.702 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:08.702 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.702 [349/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:08.702 [350/745] Linking static target lib/librte_efd.a 00:01:08.702 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:08.702 [352/745] Linking static target lib/librte_pcapng.a 00:01:08.962 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:08.962 [354/745] Linking static target lib/librte_lpm.a 00:01:08.962 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:08.962 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.962 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.962 [358/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.962 [359/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:08.962 [360/745] Linking static target lib/librte_reorder.a 00:01:08.962 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:09.232 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.232 [363/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:09.232 [364/745] Linking static target lib/acl/libavx2_tmp.a 00:01:09.232 [365/745] Generating lib/rte_port_def with a custom command 00:01:09.233 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:09.233 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:09.233 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:09.233 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:09.233 [370/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:09.233 [371/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:09.233 [372/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:09.233 [373/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.233 [374/745] Generating lib/rte_pdump_mingw with a custom command 00:01:09.233 [375/745] 
Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:09.233 [376/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:09.233 [377/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:09.494 [378/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.494 [379/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.494 [380/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:09.494 [381/745] Linking static target lib/librte_security.a 00:01:09.494 [382/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:09.494 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:09.494 [384/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:09.494 [385/745] Linking static target lib/librte_power.a 00:01:09.494 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:09.494 [387/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:09.494 [388/745] Linking static target lib/librte_hash.a 00:01:09.494 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.757 [390/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:09.757 [391/745] Linking static target lib/acl/libavx512_tmp.a 00:01:09.757 [392/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:09.757 [393/745] Linking static target lib/librte_acl.a 00:01:09.757 [394/745] Linking static target lib/librte_rib.a 00:01:09.757 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:10.022 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:10.022 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:10.022 [398/745] Generating lib/rte_table_def with a custom command 00:01:10.022 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:10.022 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.022 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:10.289 [402/745] Linking static target lib/librte_ethdev.a 00:01:10.289 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:10.289 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.289 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.551 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:10.551 [407/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:10.551 [408/745] Linking static target lib/librte_mbuf.a 00:01:10.551 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:10.551 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:10.551 [411/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:10.551 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:10.551 [413/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:10.551 [414/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.551 [415/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:10.551 [416/745] Generating lib/rte_pipeline_def with a custom command 
00:01:10.551 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:10.551 [418/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:10.816 [419/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:10.816 [420/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:10.816 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:10.816 [422/745] Generating lib/rte_graph_def with a custom command 00:01:10.816 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:10.816 [424/745] Linking static target lib/librte_fib.a 00:01:10.816 [425/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:10.816 [426/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.084 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:11.085 [428/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:11.085 [429/745] Linking static target lib/librte_member.a 00:01:11.085 [430/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:11.085 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:11.085 [432/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:11.085 [433/745] Generating lib/rte_node_def with a custom command 00:01:11.085 [434/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:11.085 [435/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:11.085 [436/745] Linking static target lib/librte_eventdev.a 00:01:11.085 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:11.085 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:11.085 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:11.346 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.346 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:11.346 [442/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.346 [443/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:11.346 [444/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:11.346 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:11.346 [446/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:11.346 [447/745] Linking static target lib/librte_sched.a 00:01:11.346 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:11.607 [449/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:11.607 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:11.607 [451/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:11.607 [452/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:11.607 [453/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.607 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:11.608 [455/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:11.608 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:11.608 [457/745] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:11.608 [458/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:11.608 [459/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:11.608 [460/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:11.608 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:11.608 [462/745] Linking static target lib/librte_pdump.a 00:01:11.876 [463/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:11.876 [464/745] Linking static target lib/librte_cryptodev.a 00:01:11.876 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:11.876 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:11.876 [467/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:11.876 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:11.876 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:11.876 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:11.876 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:11.876 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:12.140 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:12.140 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:12.140 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:12.140 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.140 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:12.140 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:12.140 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:12.140 [480/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:12.140 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:12.140 [482/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.140 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:12.402 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:12.402 [485/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:12.402 [486/745] Linking static target lib/librte_table.a 00:01:12.402 [487/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.402 [488/745] Linking static target drivers/librte_bus_vdev.a 00:01:12.402 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:12.402 [490/745] Linking static target lib/librte_ipsec.a 00:01:12.664 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:12.664 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.664 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.664 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:12.664 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.929 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:12.929 [497/745] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:12.929 [498/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:12.929 [499/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.929 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:12.929 [501/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:12.929 [502/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:12.929 [503/745] Linking static target lib/librte_graph.a 00:01:12.929 [504/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:13.194 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:13.194 [506/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.194 [507/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.194 [508/745] Linking static target drivers/librte_bus_pci.a 00:01:13.194 [509/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:13.194 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:13.194 [511/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.455 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:13.455 [513/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.455 [514/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:13.718 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:13.718 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.976 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:13.976 [518/745] Linking static target lib/librte_port.a 00:01:13.976 [519/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:13.976 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:13.976 [521/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.239 [522/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:14.239 [523/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:14.239 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:14.239 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:14.239 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:14.505 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.505 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:14.505 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:14.505 [530/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:14.505 [531/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:14.505 [532/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:14.772 [533/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.772 [534/745] Linking static target drivers/librte_mempool_ring.a 00:01:14.772 [535/745] Compiling C 
object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.772 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:14.772 [537/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.772 [538/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:14.772 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:14.772 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:15.037 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.301 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:15.301 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:15.564 [544/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:15.564 [545/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:15.564 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:15.564 [547/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:15.564 [548/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:15.564 [549/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:15.564 [550/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:15.828 [551/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:15.828 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:16.092 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:16.092 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:16.092 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:16.358 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:16.358 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:16.358 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:16.618 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:16.618 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:16.879 [561/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:16.879 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:16.879 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:16.879 [564/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:16.879 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:16.879 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:17.142 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:17.142 [568/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:17.142 [569/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:17.142 [570/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.142 [571/745] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:17.142 [572/745] Linking target lib/librte_eal.so.23.0 00:01:17.142 [573/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:17.407 [574/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:17.407 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:17.407 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:17.407 [577/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:17.407 [578/745] Linking target lib/librte_ring.so.23.0 00:01:17.670 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:17.670 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:17.670 [581/745] Linking target lib/librte_meter.so.23.0 00:01:17.670 [582/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:17.670 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:17.670 [584/745] Linking target lib/librte_pci.so.23.0 00:01:17.670 [585/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.670 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:17.670 [587/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:17.670 [588/745] Linking target lib/librte_timer.so.23.0 00:01:17.670 [589/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:17.670 [590/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:17.670 [591/745] Linking target lib/librte_cfgfile.so.23.0 00:01:17.670 [592/745] Linking target lib/librte_jobstats.so.23.0 00:01:17.670 [593/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:17.670 [594/745] Linking target lib/librte_acl.so.23.0 00:01:17.670 [595/745] Linking target lib/librte_rawdev.so.23.0 00:01:17.670 [596/745] Linking target lib/librte_rcu.so.23.0 00:01:17.670 [597/745] Linking target lib/librte_mempool.so.23.0 00:01:17.934 [598/745] Linking target lib/librte_stack.so.23.0 00:01:17.934 [599/745] Linking target lib/librte_dmadev.so.23.0 00:01:17.934 [600/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:17.934 [601/745] Linking target lib/librte_graph.so.23.0 00:01:17.934 [602/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:17.934 [603/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:17.934 [604/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:17.934 [605/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:17.934 [606/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:17.934 [607/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:17.934 [608/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:17.934 [609/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:17.934 [610/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:18.200 [611/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:18.200 [612/745] Generating symbol file 
drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:18.200 [613/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:18.200 [614/745] Linking target lib/librte_rib.so.23.0 00:01:18.200 [615/745] Linking target lib/librte_mbuf.so.23.0 00:01:18.200 [616/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:18.200 [617/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:18.200 [618/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:18.200 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:18.200 [620/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:18.200 [621/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:18.460 [622/745] Linking target lib/librte_fib.so.23.0 00:01:18.460 [623/745] Linking target lib/librte_net.so.23.0 00:01:18.460 [624/745] Linking target lib/librte_bbdev.so.23.0 00:01:18.460 [625/745] Linking target lib/librte_compressdev.so.23.0 00:01:18.460 [626/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:18.720 [627/745] Linking target lib/librte_cryptodev.so.23.0 00:01:18.720 [628/745] Linking target lib/librte_distributor.so.23.0 00:01:18.720 [629/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:18.720 [630/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:18.720 [631/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:18.720 [632/745] Linking target lib/librte_gpudev.so.23.0 00:01:18.720 [633/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:18.720 [634/745] Linking target lib/librte_reorder.so.23.0 00:01:18.720 [635/745] Linking target lib/librte_regexdev.so.23.0 00:01:18.720 [636/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:18.720 [637/745] Linking target lib/librte_ethdev.so.23.0 00:01:18.720 [638/745] Linking target lib/librte_hash.so.23.0 00:01:18.720 [639/745] Linking target lib/librte_sched.so.23.0 00:01:18.720 [640/745] Linking target lib/librte_cmdline.so.23.0 00:01:18.720 [641/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:18.720 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:18.720 [643/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:18.720 [644/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:18.981 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:18.981 [646/745] Linking target lib/librte_security.so.23.0 00:01:18.981 [647/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:18.981 [648/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:18.981 [649/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:18.981 [650/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:18.981 [651/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:18.981 [652/745] Linking target lib/librte_efd.so.23.0 00:01:18.981 [653/745] Linking target lib/librte_lpm.so.23.0 00:01:18.981 [654/745] Linking target lib/librte_member.so.23.0 00:01:18.981 
[655/745] Linking target lib/librte_pcapng.so.23.0 00:01:18.981 [656/745] Linking target lib/librte_gso.so.23.0 00:01:18.981 [657/745] Linking target lib/librte_ip_frag.so.23.0 00:01:18.981 [658/745] Linking target lib/librte_gro.so.23.0 00:01:18.981 [659/745] Linking target lib/librte_bpf.so.23.0 00:01:18.981 [660/745] Linking target lib/librte_metrics.so.23.0 00:01:19.240 [661/745] Linking target lib/librte_eventdev.so.23.0 00:01:19.240 [662/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:19.240 [663/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:19.241 [664/745] Linking target lib/librte_power.so.23.0 00:01:19.241 [665/745] Linking target lib/librte_ipsec.so.23.0 00:01:19.241 [666/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:19.241 [667/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:19.241 [668/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:19.241 [669/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:19.241 [670/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:19.241 [671/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:19.241 [672/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:19.241 [673/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:19.241 [674/745] Linking target lib/librte_bitratestats.so.23.0 00:01:19.241 [675/745] Linking target lib/librte_latencystats.so.23.0 00:01:19.241 [676/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:19.241 [677/745] Linking target lib/librte_pdump.so.23.0 00:01:19.501 [678/745] Linking target lib/librte_port.so.23.0 00:01:19.501 [679/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:19.501 [680/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:19.501 [681/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:19.501 [682/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:19.501 [683/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:19.501 [684/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:19.501 [685/745] Linking target lib/librte_table.so.23.0 00:01:19.760 [686/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:19.760 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:19.760 [688/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:19.760 [689/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:19.760 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:19.760 [691/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:19.760 [692/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:20.019 [693/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:20.277 [694/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:20.277 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:20.277 [696/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:20.535 [697/745] Compiling C 
object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:20.535 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:20.794 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:20.794 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:20.794 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:20.794 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:21.053 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:21.054 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:21.312 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:21.312 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:21.312 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:21.312 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:21.878 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:21.878 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.878 [711/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:22.137 [712/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:23.515 [713/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:23.515 [714/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:23.515 [715/745] Linking static target lib/librte_node.a 00:01:23.515 [716/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:23.515 [717/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.774 [718/745] Linking target lib/librte_node.so.23.0 00:01:24.344 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:32.530 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:04.604 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.604 [722/745] Linking static target lib/librte_vhost.a 00:02:04.604 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.604 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:16.824 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:16.824 [726/745] Linking static target lib/librte_pipeline.a 00:02:16.824 [727/745] Linking target app/dpdk-test-regex 00:02:16.824 [728/745] Linking target app/dpdk-dumpcap 00:02:16.824 [729/745] Linking target app/dpdk-test-bbdev 00:02:16.824 [730/745] Linking target app/dpdk-test-sad 00:02:16.824 [731/745] Linking target app/dpdk-test-gpudev 00:02:16.824 [732/745] Linking target app/dpdk-test-security-perf 00:02:16.824 [733/745] Linking target app/dpdk-test-fib 00:02:16.824 [734/745] Linking target app/dpdk-test-eventdev 00:02:16.824 [735/745] Linking target app/dpdk-test-compress-perf 00:02:16.824 [736/745] Linking target app/dpdk-test-pipeline 00:02:16.824 [737/745] Linking target app/dpdk-test-flow-perf 00:02:16.824 [738/745] Linking target app/dpdk-test-cmdline 00:02:16.824 [739/745] Linking target app/dpdk-pdump 00:02:16.824 [740/745] Linking target app/dpdk-test-acl 00:02:16.824 [741/745] Linking target app/dpdk-proc-info 00:02:16.824 [742/745] 
Linking target app/dpdk-test-crypto-perf 00:02:16.824 [743/745] Linking target app/dpdk-testpmd 00:02:17.759 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.759 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:17.759 19:11:15 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:17.759 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:17.759 [0/1] Installing files. 00:02:18.023 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:18.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:18.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:18.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:18.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:18.023 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:18.025 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:18.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:18.026 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:18.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:18.028 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:18.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:18.030 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:18.030 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:18.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:18.030 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_telemetry.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.030 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:18.291 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.291 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 
Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.292 Installing lib/librte_graph.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:18.554 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:18.554 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:18.554 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:18.554 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:18.554 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.554 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.555 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.556 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.557 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
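The header staging above copies DPDK's public rte_*.h files into dpdk/build/include, the include prefix that the SPDK configure step later in this log points at. A minimal sketch of the same kind of staged install, assuming only the prefix dpdk/build, libdir lib, and build directory build-tmp implied by the paths in the log (the autotest wrapper may pass additional Meson options):

$ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
$ meson setup build-tmp --prefix="$PWD/build" --libdir=lib
$ ninja -C build-tmp
$ meson install -C build-tmp    # copies headers into build/include and libraries into build/lib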
00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:18.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:18.820 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:18.820 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:18.820 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:18.820 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:18.820 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:18.820 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:18.820 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:18.820 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:18.820 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:18.820 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:18.820 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:18.820 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:18.820 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:18.820 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:18.820 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:18.820 
Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:18.821 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:18.821 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:18.821 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:18.821 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:18.821 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:18.821 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:18.821 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:18.821 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:18.821 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:18.821 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:18.821 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:18.821 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:18.821 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:18.821 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:18.821 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:18.821 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:18.821 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:18.821 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:18.821 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:18.821 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:18.821 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:18.821 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:18.821 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:18.821 Installing symlink pointing to librte_cfgfile.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:18.821 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:18.821 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:18.821 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:18.821 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:18.821 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:18.821 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:18.821 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:18.821 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:18.821 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:18.821 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:18.821 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:18.821 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:18.821 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:18.821 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:18.821 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:18.821 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:18.821 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:18.821 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:18.821 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:18.821 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:18.821 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:18.821 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:18.821 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:18.821 Installing symlink pointing to 
librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:18.821 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:18.821 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:18.821 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:18.821 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:18.821 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:18.821 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:18.821 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:18.821 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:18.821 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:18.821 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:18.821 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:18.821 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:18.821 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:18.821 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:18.821 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:18.821 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:18.821 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:18.821 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:18.821 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:18.821 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:18.821 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:18.821 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:18.821 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:18.821 Installing symlink pointing to librte_vhost.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:18.821 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:18.821 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:18.821 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:18.821 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:18.821 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:18.822 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:18.822 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:18.822 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:18.822 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:18.822 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:18.822 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:18.822 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:18.822 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:18.822 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:18.822 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:18.822 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:18.822 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:18.822 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:18.822 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:18.822 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:18.822 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:18.822 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:18.822 Installing symlink pointing to librte_net_i40e.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:18.822 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:18.822 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:18.822 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:18.822 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:18.822 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:18.822 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:18.822 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:18.822 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:18.822 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:18.822 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:18.822 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:18.822 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:18.822 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:18.822 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:18.822 19:11:16 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:18.822 19:11:16 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:18.822 19:11:16 -- common/autobuild_common.sh@203 -- $ cat 00:02:18.822 19:11:16 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.822 00:02:18.822 real 1m22.830s 00:02:18.822 user 14m24.493s 00:02:18.822 sys 1m49.872s 00:02:18.822 19:11:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:18.822 19:11:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.822 ************************************ 00:02:18.822 END TEST build_native_dpdk 00:02:18.822 ************************************ 00:02:18.822 19:11:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:18.822 19:11:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:18.822 19:11:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:18.822 19:11:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:18.822 19:11:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:18.822 19:11:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:18.822 19:11:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:18.822 19:11:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:18.822 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:19.083 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:19.083 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:19.083 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:19.343 Using 'verbs' RDMA provider 00:02:29.590 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 
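With DPDK staged under dpdk/build (including the pmds-23.0 plugin symlinks created just above), the configure line above builds SPDK against that external tree rather than the bundled submodule: --with-dpdk points at dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" message indicates that the libdpdk.pc installed moments earlier supplies the library and include paths. A hedged way to inspect the same pkg-config data by hand (the module name libdpdk comes from the installed .pc file; the exact flags printed depend on how DPDK was built):

$ export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
$ pkg-config --cflags libdpdk    # expected to include -I.../dpdk/build/include
$ pkg-config --libs libdpdk      # expected to include -L.../dpdk/build/lib and the -lrte_* libraries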
00:02:39.576 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:39.576 Creating mk/config.mk...done. 00:02:39.576 Creating mk/cc.flags.mk...done. 00:02:39.576 Type 'make' to build. 00:02:39.576 19:11:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:39.576 19:11:36 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:39.576 19:11:36 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:39.576 19:11:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.576 ************************************ 00:02:39.576 START TEST make 00:02:39.576 ************************************ 00:02:39.576 19:11:36 -- common/autotest_common.sh@1114 -- $ make -j48 00:02:39.576 make[1]: Nothing to be done for 'all'. 00:02:40.522 The Meson build system 00:02:40.522 Version: 1.5.0 00:02:40.522 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:40.522 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:40.522 Build type: native build 00:02:40.522 Project name: libvfio-user 00:02:40.522 Project version: 0.0.1 00:02:40.522 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.522 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:40.522 Host machine cpu family: x86_64 00:02:40.522 Host machine cpu: x86_64 00:02:40.522 Run-time dependency threads found: YES 00:02:40.522 Library dl found: YES 00:02:40.522 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.522 Run-time dependency json-c found: YES 0.17 00:02:40.522 Run-time dependency cmocka found: YES 1.1.7 00:02:40.522 Program pytest-3 found: NO 00:02:40.522 Program flake8 found: NO 00:02:40.522 Program misspell-fixer found: NO 00:02:40.522 Program restructuredtext-lint found: NO 00:02:40.522 Program valgrind found: YES (/usr/bin/valgrind) 00:02:40.522 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.522 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.522 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.522 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:40.522 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:40.522 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:40.522 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
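The Meson configure stage above resolves libvfio-user's hard run-time dependencies (json-c 0.17, cmocka 1.1.7) and probes for optional test tooling (pytest-3, flake8, valgrind, and so on). Assuming both libraries expose pkg-config metadata on the build host, the reported versions can be cross-checked directly:

$ pkg-config --modversion json-c     # 0.17 on this builder
$ pkg-config --modversion cmocka     # 1.1.7 on this builder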
00:02:40.522 Build targets in project: 8 00:02:40.522 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:40.522 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:40.522 00:02:40.522 libvfio-user 0.0.1 00:02:40.522 00:02:40.522 User defined options 00:02:40.522 buildtype : debug 00:02:40.522 default_library: shared 00:02:40.522 libdir : /usr/local/lib 00:02:40.522 00:02:40.522 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.099 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:41.366 [1/37] Compiling C object samples/null.p/null.c.o 00:02:41.366 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:41.366 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:41.366 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:41.366 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:41.366 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:41.366 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:41.366 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:41.631 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:41.631 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:41.631 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:41.631 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:41.631 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:41.631 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:41.631 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:41.631 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:41.631 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:41.631 [18/37] Compiling C object samples/server.p/server.c.o 00:02:41.631 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:41.631 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:41.631 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:41.631 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:41.631 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:41.631 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:41.631 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:41.631 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:41.631 [27/37] Compiling C object samples/client.p/client.c.o 00:02:41.631 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:41.631 [29/37] Linking target samples/client 00:02:41.631 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:41.891 [31/37] Linking target test/unit_tests 00:02:41.891 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:41.891 [33/37] Linking target samples/server 00:02:41.891 [34/37] Linking target samples/gpio-pci-idio-16 00:02:41.891 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:41.891 [36/37] Linking target samples/null 00:02:41.891 [37/37] Linking target samples/lspci 00:02:41.891 INFO: autodetecting backend as ninja 00:02:41.891 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
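Read together with the DESTDIR install command that follows, the libvfio-user submodule is configured out of tree in build-debug, compiled with ninja, and then installed into SPDK's build/ prefix through DESTDIR. A rough equivalent of that sequence, with the option values taken from the "User defined options" block above (SRC and BUILD are just shorthand here; the actual SPDK wrapper script may pass more options):

$ SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
$ BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
$ meson setup "$BUILD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
$ ninja -C "$BUILD"
$ DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"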
00:02:42.153 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:43.101 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:43.101 ninja: no work to do. 00:02:55.301 CC lib/log/log.o 00:02:55.302 CC lib/ut_mock/mock.o 00:02:55.302 CC lib/log/log_flags.o 00:02:55.302 CC lib/log/log_deprecated.o 00:02:55.302 CC lib/ut/ut.o 00:02:55.302 LIB libspdk_ut_mock.a 00:02:55.302 SO libspdk_ut_mock.so.5.0 00:02:55.302 LIB libspdk_log.a 00:02:55.302 LIB libspdk_ut.a 00:02:55.302 SO libspdk_log.so.6.1 00:02:55.302 SO libspdk_ut.so.1.0 00:02:55.302 SYMLINK libspdk_ut_mock.so 00:02:55.302 SYMLINK libspdk_ut.so 00:02:55.302 SYMLINK libspdk_log.so 00:02:55.302 CC lib/dma/dma.o 00:02:55.302 CC lib/ioat/ioat.o 00:02:55.302 CC lib/util/base64.o 00:02:55.302 CXX lib/trace_parser/trace.o 00:02:55.302 CC lib/util/bit_array.o 00:02:55.302 CC lib/util/cpuset.o 00:02:55.302 CC lib/util/crc16.o 00:02:55.302 CC lib/util/crc32.o 00:02:55.302 CC lib/util/crc32c.o 00:02:55.302 CC lib/util/crc32_ieee.o 00:02:55.302 CC lib/util/crc64.o 00:02:55.302 CC lib/util/dif.o 00:02:55.302 CC lib/util/fd.o 00:02:55.302 CC lib/util/file.o 00:02:55.302 CC lib/util/hexlify.o 00:02:55.302 CC lib/util/iov.o 00:02:55.302 CC lib/util/math.o 00:02:55.302 CC lib/util/pipe.o 00:02:55.302 CC lib/util/strerror_tls.o 00:02:55.302 CC lib/util/string.o 00:02:55.302 CC lib/util/uuid.o 00:02:55.302 CC lib/util/xor.o 00:02:55.302 CC lib/util/fd_group.o 00:02:55.302 CC lib/util/zipf.o 00:02:55.302 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.302 CC lib/vfio_user/host/vfio_user.o 00:02:55.302 LIB libspdk_dma.a 00:02:55.302 SO libspdk_dma.so.3.0 00:02:55.302 SYMLINK libspdk_dma.so 00:02:55.302 LIB libspdk_ioat.a 00:02:55.302 SO libspdk_ioat.so.6.0 00:02:55.302 SYMLINK libspdk_ioat.so 00:02:55.302 LIB libspdk_vfio_user.a 00:02:55.302 SO libspdk_vfio_user.so.4.0 00:02:55.302 SYMLINK libspdk_vfio_user.so 00:02:55.302 LIB libspdk_util.a 00:02:55.302 SO libspdk_util.so.8.0 00:02:55.302 SYMLINK libspdk_util.so 00:02:55.302 CC lib/vmd/vmd.o 00:02:55.302 CC lib/rdma/common.o 00:02:55.302 CC lib/json/json_parse.o 00:02:55.302 CC lib/idxd/idxd.o 00:02:55.302 CC lib/vmd/led.o 00:02:55.302 CC lib/conf/conf.o 00:02:55.302 CC lib/rdma/rdma_verbs.o 00:02:55.302 CC lib/env_dpdk/env.o 00:02:55.302 CC lib/idxd/idxd_user.o 00:02:55.302 CC lib/json/json_util.o 00:02:55.302 CC lib/env_dpdk/memory.o 00:02:55.302 CC lib/idxd/idxd_kernel.o 00:02:55.302 CC lib/json/json_write.o 00:02:55.302 CC lib/env_dpdk/pci.o 00:02:55.302 CC lib/env_dpdk/init.o 00:02:55.302 CC lib/env_dpdk/threads.o 00:02:55.302 CC lib/env_dpdk/pci_ioat.o 00:02:55.302 CC lib/env_dpdk/pci_virtio.o 00:02:55.302 CC lib/env_dpdk/pci_vmd.o 00:02:55.302 CC lib/env_dpdk/pci_idxd.o 00:02:55.302 CC lib/env_dpdk/pci_event.o 00:02:55.302 CC lib/env_dpdk/sigbus_handler.o 00:02:55.302 CC lib/env_dpdk/pci_dpdk.o 00:02:55.302 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.302 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.302 LIB libspdk_trace_parser.a 00:02:55.302 SO libspdk_trace_parser.so.4.0 00:02:55.302 SYMLINK libspdk_trace_parser.so 00:02:55.302 LIB libspdk_conf.a 00:02:55.302 SO libspdk_conf.so.5.0 00:02:55.302 LIB libspdk_json.a 00:02:55.302 SYMLINK libspdk_conf.so 00:02:55.302 SO libspdk_json.so.5.1 00:02:55.302 SYMLINK libspdk_json.so 00:02:55.302 LIB libspdk_rdma.a 00:02:55.302 SO libspdk_rdma.so.5.0 00:02:55.302 SYMLINK 
libspdk_rdma.so 00:02:55.302 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.302 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.302 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.302 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.302 LIB libspdk_idxd.a 00:02:55.302 SO libspdk_idxd.so.11.0 00:02:55.302 SYMLINK libspdk_idxd.so 00:02:55.302 LIB libspdk_vmd.a 00:02:55.302 SO libspdk_vmd.so.5.0 00:02:55.302 SYMLINK libspdk_vmd.so 00:02:55.302 LIB libspdk_jsonrpc.a 00:02:55.560 SO libspdk_jsonrpc.so.5.1 00:02:55.560 SYMLINK libspdk_jsonrpc.so 00:02:55.560 CC lib/rpc/rpc.o 00:02:55.819 LIB libspdk_rpc.a 00:02:55.819 SO libspdk_rpc.so.5.0 00:02:55.819 SYMLINK libspdk_rpc.so 00:02:56.077 CC lib/sock/sock.o 00:02:56.077 CC lib/sock/sock_rpc.o 00:02:56.077 CC lib/trace/trace.o 00:02:56.077 CC lib/trace/trace_flags.o 00:02:56.077 CC lib/trace/trace_rpc.o 00:02:56.077 CC lib/notify/notify.o 00:02:56.077 CC lib/notify/notify_rpc.o 00:02:56.077 LIB libspdk_notify.a 00:02:56.077 SO libspdk_notify.so.5.0 00:02:56.336 SYMLINK libspdk_notify.so 00:02:56.336 LIB libspdk_trace.a 00:02:56.336 SO libspdk_trace.so.9.0 00:02:56.336 SYMLINK libspdk_trace.so 00:02:56.336 LIB libspdk_sock.a 00:02:56.336 SO libspdk_sock.so.8.0 00:02:56.336 CC lib/thread/thread.o 00:02:56.336 CC lib/thread/iobuf.o 00:02:56.336 SYMLINK libspdk_sock.so 00:02:56.594 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.594 CC lib/nvme/nvme_ctrlr.o 00:02:56.594 CC lib/nvme/nvme_fabric.o 00:02:56.594 CC lib/nvme/nvme_ns_cmd.o 00:02:56.594 CC lib/nvme/nvme_ns.o 00:02:56.595 CC lib/nvme/nvme_pcie_common.o 00:02:56.595 CC lib/nvme/nvme_pcie.o 00:02:56.595 CC lib/nvme/nvme_qpair.o 00:02:56.595 CC lib/nvme/nvme.o 00:02:56.595 CC lib/nvme/nvme_quirks.o 00:02:56.595 CC lib/nvme/nvme_transport.o 00:02:56.595 CC lib/nvme/nvme_discovery.o 00:02:56.595 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:56.595 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:56.595 CC lib/nvme/nvme_tcp.o 00:02:56.595 CC lib/nvme/nvme_opal.o 00:02:56.595 CC lib/nvme/nvme_io_msg.o 00:02:56.595 CC lib/nvme/nvme_poll_group.o 00:02:56.595 CC lib/nvme/nvme_zns.o 00:02:56.595 CC lib/nvme/nvme_vfio_user.o 00:02:56.595 CC lib/nvme/nvme_cuse.o 00:02:56.595 CC lib/nvme/nvme_rdma.o 00:02:56.595 LIB libspdk_env_dpdk.a 00:02:56.595 SO libspdk_env_dpdk.so.13.0 00:02:56.853 SYMLINK libspdk_env_dpdk.so 00:02:58.265 LIB libspdk_thread.a 00:02:58.265 SO libspdk_thread.so.9.0 00:02:58.265 SYMLINK libspdk_thread.so 00:02:58.265 CC lib/blob/blobstore.o 00:02:58.265 CC lib/blob/request.o 00:02:58.265 CC lib/init/json_config.o 00:02:58.265 CC lib/blob/zeroes.o 00:02:58.265 CC lib/accel/accel.o 00:02:58.266 CC lib/vfu_tgt/tgt_endpoint.o 00:02:58.266 CC lib/virtio/virtio.o 00:02:58.266 CC lib/init/subsystem.o 00:02:58.266 CC lib/accel/accel_rpc.o 00:02:58.266 CC lib/vfu_tgt/tgt_rpc.o 00:02:58.266 CC lib/virtio/virtio_vhost_user.o 00:02:58.266 CC lib/blob/blob_bs_dev.o 00:02:58.266 CC lib/accel/accel_sw.o 00:02:58.266 CC lib/init/subsystem_rpc.o 00:02:58.266 CC lib/virtio/virtio_vfio_user.o 00:02:58.266 CC lib/init/rpc.o 00:02:58.266 CC lib/virtio/virtio_pci.o 00:02:58.552 LIB libspdk_init.a 00:02:58.552 SO libspdk_init.so.4.0 00:02:58.552 SYMLINK libspdk_init.so 00:02:58.552 LIB libspdk_virtio.a 00:02:58.552 LIB libspdk_vfu_tgt.a 00:02:58.552 SO libspdk_virtio.so.6.0 00:02:58.552 SO libspdk_vfu_tgt.so.2.0 00:02:58.552 SYMLINK libspdk_vfu_tgt.so 00:02:58.552 SYMLINK libspdk_virtio.so 00:02:58.552 CC lib/event/app.o 00:02:58.552 CC lib/event/reactor.o 00:02:58.552 CC lib/event/log_rpc.o 00:02:58.552 CC lib/event/app_rpc.o 00:02:58.552 CC 
lib/event/scheduler_static.o 00:02:59.118 LIB libspdk_nvme.a 00:02:59.118 SO libspdk_nvme.so.12.0 00:02:59.118 LIB libspdk_event.a 00:02:59.118 SO libspdk_event.so.12.0 00:02:59.118 SYMLINK libspdk_event.so 00:02:59.376 SYMLINK libspdk_nvme.so 00:02:59.376 LIB libspdk_accel.a 00:02:59.376 SO libspdk_accel.so.14.0 00:02:59.376 SYMLINK libspdk_accel.so 00:02:59.376 CC lib/bdev/bdev.o 00:02:59.376 CC lib/bdev/bdev_rpc.o 00:02:59.376 CC lib/bdev/bdev_zone.o 00:02:59.376 CC lib/bdev/scsi_nvme.o 00:02:59.376 CC lib/bdev/part.o 00:03:01.292 LIB libspdk_blob.a 00:03:01.292 SO libspdk_blob.so.10.1 00:03:01.292 SYMLINK libspdk_blob.so 00:03:01.292 CC lib/blobfs/blobfs.o 00:03:01.292 CC lib/blobfs/tree.o 00:03:01.292 CC lib/lvol/lvol.o 00:03:02.231 LIB libspdk_bdev.a 00:03:02.231 SO libspdk_bdev.so.14.0 00:03:02.231 LIB libspdk_blobfs.a 00:03:02.231 SO libspdk_blobfs.so.9.0 00:03:02.231 LIB libspdk_lvol.a 00:03:02.231 SYMLINK libspdk_bdev.so 00:03:02.231 SYMLINK libspdk_blobfs.so 00:03:02.231 SO libspdk_lvol.so.9.1 00:03:02.231 SYMLINK libspdk_lvol.so 00:03:02.231 CC lib/nbd/nbd.o 00:03:02.231 CC lib/scsi/dev.o 00:03:02.231 CC lib/nvmf/ctrlr.o 00:03:02.231 CC lib/ublk/ublk.o 00:03:02.231 CC lib/nbd/nbd_rpc.o 00:03:02.231 CC lib/scsi/lun.o 00:03:02.231 CC lib/ftl/ftl_core.o 00:03:02.231 CC lib/nvmf/ctrlr_discovery.o 00:03:02.231 CC lib/ublk/ublk_rpc.o 00:03:02.231 CC lib/scsi/port.o 00:03:02.231 CC lib/nvmf/ctrlr_bdev.o 00:03:02.231 CC lib/ftl/ftl_init.o 00:03:02.231 CC lib/scsi/scsi.o 00:03:02.231 CC lib/nvmf/subsystem.o 00:03:02.231 CC lib/ftl/ftl_layout.o 00:03:02.231 CC lib/ftl/ftl_debug.o 00:03:02.231 CC lib/nvmf/nvmf.o 00:03:02.231 CC lib/scsi/scsi_bdev.o 00:03:02.231 CC lib/scsi/scsi_pr.o 00:03:02.231 CC lib/nvmf/nvmf_rpc.o 00:03:02.231 CC lib/ftl/ftl_io.o 00:03:02.231 CC lib/ftl/ftl_sb.o 00:03:02.231 CC lib/nvmf/transport.o 00:03:02.231 CC lib/ftl/ftl_l2p.o 00:03:02.231 CC lib/scsi/task.o 00:03:02.231 CC lib/scsi/scsi_rpc.o 00:03:02.231 CC lib/nvmf/tcp.o 00:03:02.231 CC lib/ftl/ftl_l2p_flat.o 00:03:02.231 CC lib/ftl/ftl_nv_cache.o 00:03:02.231 CC lib/nvmf/rdma.o 00:03:02.231 CC lib/nvmf/vfio_user.o 00:03:02.231 CC lib/ftl/ftl_band.o 00:03:02.231 CC lib/ftl/ftl_band_ops.o 00:03:02.231 CC lib/ftl/ftl_writer.o 00:03:02.231 CC lib/ftl/ftl_rq.o 00:03:02.231 CC lib/ftl/ftl_reloc.o 00:03:02.231 CC lib/ftl/ftl_l2p_cache.o 00:03:02.231 CC lib/ftl/ftl_p2l.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.231 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.799 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.799 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.799 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.799 CC lib/ftl/utils/ftl_conf.o 00:03:02.799 CC lib/ftl/utils/ftl_md.o 00:03:02.799 CC lib/ftl/utils/ftl_mempool.o 00:03:02.799 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.799 CC lib/ftl/utils/ftl_property.o 00:03:02.799 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:02.799 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.799 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.799 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:02.799 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:02.799 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:02.799 CC lib/ftl/upgrade/ftl_sb_v3.o 
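In the SPDK make output running through this part of the log, each library built with --with-shared is reported in up to three steps, which by their names correspond to the static archive (LIB), the versioned shared object (SO), and the unversioned development symlink (SYMLINK). That last step amounts to a relative symlink placed next to the versioned file; an illustrative sketch only, since the output directory and the exact soversion here are assumptions rather than something stated in this log:

$ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib    # assumed shared-library output directory
$ ln -sf libspdk_log.so.6.1 libspdk_log.so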
00:03:02.799 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:02.799 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.799 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.799 CC lib/ftl/base/ftl_base_dev.o 00:03:02.799 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.799 CC lib/ftl/ftl_trace.o 00:03:03.057 LIB libspdk_nbd.a 00:03:03.057 SO libspdk_nbd.so.6.0 00:03:03.057 SYMLINK libspdk_nbd.so 00:03:03.057 LIB libspdk_scsi.a 00:03:03.315 SO libspdk_scsi.so.8.0 00:03:03.315 SYMLINK libspdk_scsi.so 00:03:03.315 LIB libspdk_ublk.a 00:03:03.315 SO libspdk_ublk.so.2.0 00:03:03.315 CC lib/iscsi/conn.o 00:03:03.315 CC lib/vhost/vhost.o 00:03:03.315 CC lib/vhost/vhost_rpc.o 00:03:03.315 CC lib/iscsi/init_grp.o 00:03:03.315 CC lib/vhost/vhost_scsi.o 00:03:03.315 CC lib/iscsi/iscsi.o 00:03:03.315 CC lib/vhost/vhost_blk.o 00:03:03.315 CC lib/iscsi/md5.o 00:03:03.315 CC lib/vhost/rte_vhost_user.o 00:03:03.315 CC lib/iscsi/param.o 00:03:03.315 CC lib/iscsi/portal_grp.o 00:03:03.315 CC lib/iscsi/tgt_node.o 00:03:03.315 CC lib/iscsi/iscsi_subsystem.o 00:03:03.315 CC lib/iscsi/iscsi_rpc.o 00:03:03.315 CC lib/iscsi/task.o 00:03:03.574 SYMLINK libspdk_ublk.so 00:03:03.574 LIB libspdk_ftl.a 00:03:03.832 SO libspdk_ftl.so.8.0 00:03:04.091 SYMLINK libspdk_ftl.so 00:03:04.657 LIB libspdk_vhost.a 00:03:04.657 SO libspdk_vhost.so.7.1 00:03:04.657 SYMLINK libspdk_vhost.so 00:03:04.917 LIB libspdk_iscsi.a 00:03:04.917 LIB libspdk_nvmf.a 00:03:04.917 SO libspdk_iscsi.so.7.0 00:03:04.917 SO libspdk_nvmf.so.17.0 00:03:05.175 SYMLINK libspdk_iscsi.so 00:03:05.175 SYMLINK libspdk_nvmf.so 00:03:05.175 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.175 CC module/vfu_device/vfu_virtio.o 00:03:05.175 CC module/vfu_device/vfu_virtio_blk.o 00:03:05.175 CC module/vfu_device/vfu_virtio_scsi.o 00:03:05.175 CC module/vfu_device/vfu_virtio_rpc.o 00:03:05.433 CC module/blob/bdev/blob_bdev.o 00:03:05.433 CC module/accel/error/accel_error.o 00:03:05.433 CC module/accel/dsa/accel_dsa.o 00:03:05.433 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.433 CC module/accel/error/accel_error_rpc.o 00:03:05.433 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.433 CC module/accel/iaa/accel_iaa.o 00:03:05.433 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.433 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.433 CC module/accel/ioat/accel_ioat.o 00:03:05.433 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.433 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.433 CC module/sock/posix/posix.o 00:03:05.433 LIB libspdk_env_dpdk_rpc.a 00:03:05.433 SO libspdk_env_dpdk_rpc.so.5.0 00:03:05.433 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.433 LIB libspdk_scheduler_gscheduler.a 00:03:05.433 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.433 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:05.433 SO libspdk_scheduler_gscheduler.so.3.0 00:03:05.433 LIB libspdk_accel_error.a 00:03:05.691 LIB libspdk_accel_ioat.a 00:03:05.691 LIB libspdk_scheduler_dynamic.a 00:03:05.691 LIB libspdk_accel_iaa.a 00:03:05.691 SO libspdk_accel_error.so.1.0 00:03:05.691 SO libspdk_accel_ioat.so.5.0 00:03:05.691 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.691 SO libspdk_scheduler_dynamic.so.3.0 00:03:05.691 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.691 SO libspdk_accel_iaa.so.2.0 00:03:05.691 LIB libspdk_accel_dsa.a 00:03:05.691 SYMLINK libspdk_accel_error.so 00:03:05.691 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.691 SYMLINK libspdk_accel_ioat.so 00:03:05.691 SO libspdk_accel_dsa.so.4.0 00:03:05.691 LIB libspdk_blob_bdev.a 00:03:05.691 SYMLINK libspdk_accel_iaa.so 00:03:05.691 SO 
libspdk_blob_bdev.so.10.1 00:03:05.691 SYMLINK libspdk_accel_dsa.so 00:03:05.691 SYMLINK libspdk_blob_bdev.so 00:03:05.950 CC module/bdev/raid/bdev_raid.o 00:03:05.950 CC module/bdev/error/vbdev_error.o 00:03:05.950 CC module/bdev/delay/vbdev_delay.o 00:03:05.950 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.950 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.950 CC module/bdev/raid/bdev_raid_sb.o 00:03:05.950 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.950 CC module/bdev/iscsi/bdev_iscsi.o 00:03:05.950 CC module/bdev/null/bdev_null.o 00:03:05.950 CC module/bdev/nvme/bdev_nvme.o 00:03:05.950 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.950 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:05.950 CC module/bdev/raid/raid1.o 00:03:05.950 CC module/bdev/raid/raid0.o 00:03:05.950 CC module/bdev/split/vbdev_split.o 00:03:05.950 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.950 CC module/bdev/malloc/bdev_malloc.o 00:03:05.950 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.950 CC module/bdev/null/bdev_null_rpc.o 00:03:05.950 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:05.950 CC module/bdev/raid/concat.o 00:03:05.950 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:05.950 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:05.950 CC module/bdev/nvme/nvme_rpc.o 00:03:05.950 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.950 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.950 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.950 CC module/bdev/gpt/gpt.o 00:03:05.950 CC module/bdev/nvme/vbdev_opal.o 00:03:05.950 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:05.950 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.950 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.950 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:05.950 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:05.950 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.950 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:05.950 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:05.950 CC module/bdev/aio/bdev_aio.o 00:03:05.950 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:05.950 CC module/bdev/ftl/bdev_ftl.o 00:03:05.950 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.950 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:05.950 LIB libspdk_vfu_device.a 00:03:05.950 SO libspdk_vfu_device.so.2.0 00:03:06.208 LIB libspdk_sock_posix.a 00:03:06.208 SO libspdk_sock_posix.so.5.0 00:03:06.208 SYMLINK libspdk_vfu_device.so 00:03:06.208 LIB libspdk_bdev_error.a 00:03:06.208 SYMLINK libspdk_sock_posix.so 00:03:06.208 SO libspdk_bdev_error.so.5.0 00:03:06.208 LIB libspdk_blobfs_bdev.a 00:03:06.208 SO libspdk_blobfs_bdev.so.5.0 00:03:06.467 LIB libspdk_bdev_split.a 00:03:06.467 SYMLINK libspdk_bdev_error.so 00:03:06.467 LIB libspdk_bdev_null.a 00:03:06.467 LIB libspdk_bdev_gpt.a 00:03:06.467 SO libspdk_bdev_split.so.5.0 00:03:06.467 LIB libspdk_bdev_ftl.a 00:03:06.467 SYMLINK libspdk_blobfs_bdev.so 00:03:06.467 SO libspdk_bdev_null.so.5.0 00:03:06.467 SO libspdk_bdev_gpt.so.5.0 00:03:06.467 SO libspdk_bdev_ftl.so.5.0 00:03:06.467 LIB libspdk_bdev_passthru.a 00:03:06.467 SYMLINK libspdk_bdev_split.so 00:03:06.467 LIB libspdk_bdev_aio.a 00:03:06.467 SO libspdk_bdev_passthru.so.5.0 00:03:06.467 LIB libspdk_bdev_zone_block.a 00:03:06.467 SYMLINK libspdk_bdev_null.so 00:03:06.467 SYMLINK libspdk_bdev_gpt.so 00:03:06.467 SO libspdk_bdev_aio.so.5.0 00:03:06.467 LIB libspdk_bdev_iscsi.a 00:03:06.467 SYMLINK libspdk_bdev_ftl.so 00:03:06.467 LIB libspdk_bdev_malloc.a 00:03:06.467 LIB libspdk_bdev_delay.a 00:03:06.467 SO libspdk_bdev_zone_block.so.5.0 00:03:06.467 SO libspdk_bdev_malloc.so.5.0 
00:03:06.467 SO libspdk_bdev_iscsi.so.5.0 00:03:06.467 SO libspdk_bdev_delay.so.5.0 00:03:06.467 SYMLINK libspdk_bdev_passthru.so 00:03:06.467 SYMLINK libspdk_bdev_aio.so 00:03:06.467 SYMLINK libspdk_bdev_zone_block.so 00:03:06.467 SYMLINK libspdk_bdev_iscsi.so 00:03:06.467 SYMLINK libspdk_bdev_delay.so 00:03:06.467 SYMLINK libspdk_bdev_malloc.so 00:03:06.726 LIB libspdk_bdev_virtio.a 00:03:06.726 LIB libspdk_bdev_lvol.a 00:03:06.726 SO libspdk_bdev_virtio.so.5.0 00:03:06.726 SO libspdk_bdev_lvol.so.5.0 00:03:06.726 SYMLINK libspdk_bdev_lvol.so 00:03:06.726 SYMLINK libspdk_bdev_virtio.so 00:03:06.984 LIB libspdk_bdev_raid.a 00:03:06.984 SO libspdk_bdev_raid.so.5.0 00:03:07.242 SYMLINK libspdk_bdev_raid.so 00:03:08.174 LIB libspdk_bdev_nvme.a 00:03:08.174 SO libspdk_bdev_nvme.so.6.0 00:03:08.433 SYMLINK libspdk_bdev_nvme.so 00:03:08.691 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.691 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.691 CC module/event/subsystems/vmd/vmd.o 00:03:08.691 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.691 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.691 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.691 CC module/event/subsystems/sock/sock.o 00:03:08.691 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:08.691 LIB libspdk_event_sock.a 00:03:08.691 LIB libspdk_event_vfu_tgt.a 00:03:08.691 LIB libspdk_event_vhost_blk.a 00:03:08.691 LIB libspdk_event_vmd.a 00:03:08.691 LIB libspdk_event_scheduler.a 00:03:08.691 SO libspdk_event_sock.so.4.0 00:03:08.691 LIB libspdk_event_iobuf.a 00:03:08.691 SO libspdk_event_vfu_tgt.so.2.0 00:03:08.691 SO libspdk_event_vhost_blk.so.2.0 00:03:08.691 SO libspdk_event_scheduler.so.3.0 00:03:08.691 SO libspdk_event_vmd.so.5.0 00:03:08.691 SO libspdk_event_iobuf.so.2.0 00:03:08.949 SYMLINK libspdk_event_sock.so 00:03:08.949 SYMLINK libspdk_event_vfu_tgt.so 00:03:08.949 SYMLINK libspdk_event_vhost_blk.so 00:03:08.949 SYMLINK libspdk_event_scheduler.so 00:03:08.949 SYMLINK libspdk_event_vmd.so 00:03:08.949 SYMLINK libspdk_event_iobuf.so 00:03:08.949 CC module/event/subsystems/accel/accel.o 00:03:09.206 LIB libspdk_event_accel.a 00:03:09.206 SO libspdk_event_accel.so.5.0 00:03:09.206 SYMLINK libspdk_event_accel.so 00:03:09.206 CC module/event/subsystems/bdev/bdev.o 00:03:09.464 LIB libspdk_event_bdev.a 00:03:09.464 SO libspdk_event_bdev.so.5.0 00:03:09.464 SYMLINK libspdk_event_bdev.so 00:03:09.722 CC module/event/subsystems/scsi/scsi.o 00:03:09.722 CC module/event/subsystems/ublk/ublk.o 00:03:09.722 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.722 CC module/event/subsystems/nbd/nbd.o 00:03:09.722 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:09.722 LIB libspdk_event_nbd.a 00:03:09.722 LIB libspdk_event_ublk.a 00:03:09.722 LIB libspdk_event_scsi.a 00:03:09.722 SO libspdk_event_nbd.so.5.0 00:03:09.722 SO libspdk_event_ublk.so.2.0 00:03:09.722 SO libspdk_event_scsi.so.5.0 00:03:09.980 SYMLINK libspdk_event_nbd.so 00:03:09.980 SYMLINK libspdk_event_ublk.so 00:03:09.980 SYMLINK libspdk_event_scsi.so 00:03:09.980 LIB libspdk_event_nvmf.a 00:03:09.980 SO libspdk_event_nvmf.so.5.0 00:03:09.980 SYMLINK libspdk_event_nvmf.so 00:03:09.980 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.980 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:10.239 LIB libspdk_event_vhost_scsi.a 00:03:10.239 LIB libspdk_event_iscsi.a 00:03:10.239 SO libspdk_event_vhost_scsi.so.2.0 00:03:10.239 SO libspdk_event_iscsi.so.5.0 00:03:10.239 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.239 SYMLINK 
libspdk_event_iscsi.so 00:03:10.239 SO libspdk.so.5.0 00:03:10.239 SYMLINK libspdk.so 00:03:10.510 TEST_HEADER include/spdk/accel.h 00:03:10.510 TEST_HEADER include/spdk/accel_module.h 00:03:10.510 CC app/trace_record/trace_record.o 00:03:10.510 CC app/spdk_nvme_identify/identify.o 00:03:10.510 CC app/spdk_lspci/spdk_lspci.o 00:03:10.510 CXX app/trace/trace.o 00:03:10.510 TEST_HEADER include/spdk/assert.h 00:03:10.510 TEST_HEADER include/spdk/barrier.h 00:03:10.510 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.510 TEST_HEADER include/spdk/base64.h 00:03:10.510 CC test/rpc_client/rpc_client_test.o 00:03:10.510 CC app/spdk_top/spdk_top.o 00:03:10.510 CC app/spdk_nvme_perf/perf.o 00:03:10.510 TEST_HEADER include/spdk/bdev.h 00:03:10.510 TEST_HEADER include/spdk/bdev_module.h 00:03:10.510 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.510 TEST_HEADER include/spdk/bit_array.h 00:03:10.510 TEST_HEADER include/spdk/bit_pool.h 00:03:10.510 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.510 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.510 TEST_HEADER include/spdk/blobfs.h 00:03:10.510 TEST_HEADER include/spdk/blob.h 00:03:10.510 TEST_HEADER include/spdk/conf.h 00:03:10.510 TEST_HEADER include/spdk/config.h 00:03:10.510 TEST_HEADER include/spdk/cpuset.h 00:03:10.510 TEST_HEADER include/spdk/crc16.h 00:03:10.510 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.510 TEST_HEADER include/spdk/crc32.h 00:03:10.510 TEST_HEADER include/spdk/crc64.h 00:03:10.510 CC app/spdk_dd/spdk_dd.o 00:03:10.510 TEST_HEADER include/spdk/dif.h 00:03:10.510 TEST_HEADER include/spdk/dma.h 00:03:10.510 TEST_HEADER include/spdk/endian.h 00:03:10.510 CC app/iscsi_tgt/iscsi_tgt.o 00:03:10.510 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.510 CC app/nvmf_tgt/nvmf_main.o 00:03:10.510 TEST_HEADER include/spdk/env.h 00:03:10.510 TEST_HEADER include/spdk/event.h 00:03:10.510 TEST_HEADER include/spdk/fd_group.h 00:03:10.510 CC examples/sock/hello_world/hello_sock.o 00:03:10.510 CC examples/ioat/perf/perf.o 00:03:10.510 CC examples/nvme/reconnect/reconnect.o 00:03:10.510 CC app/vhost/vhost.o 00:03:10.510 CC examples/nvme/arbitration/arbitration.o 00:03:10.510 CC examples/idxd/perf/perf.o 00:03:10.510 CC examples/nvme/hotplug/hotplug.o 00:03:10.510 TEST_HEADER include/spdk/fd.h 00:03:10.510 CC test/app/jsoncat/jsoncat.o 00:03:10.510 CC test/thread/poller_perf/poller_perf.o 00:03:10.510 CC examples/vmd/led/led.o 00:03:10.510 CC app/fio/nvme/fio_plugin.o 00:03:10.510 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:10.510 CC test/app/stub/stub.o 00:03:10.510 TEST_HEADER include/spdk/file.h 00:03:10.510 CC examples/vmd/lsvmd/lsvmd.o 00:03:10.510 CC test/event/event_perf/event_perf.o 00:03:10.510 TEST_HEADER include/spdk/ftl.h 00:03:10.510 CC test/app/histogram_perf/histogram_perf.o 00:03:10.510 CC examples/accel/perf/accel_perf.o 00:03:10.510 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.510 CC examples/util/zipf/zipf.o 00:03:10.510 TEST_HEADER include/spdk/hexlify.h 00:03:10.510 TEST_HEADER include/spdk/histogram_data.h 00:03:10.510 TEST_HEADER include/spdk/idxd.h 00:03:10.510 CC examples/nvme/hello_world/hello_world.o 00:03:10.510 CC test/nvme/aer/aer.o 00:03:10.511 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.511 TEST_HEADER include/spdk/init.h 00:03:10.511 TEST_HEADER include/spdk/ioat.h 00:03:10.511 CC app/spdk_tgt/spdk_tgt.o 00:03:10.511 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.511 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.511 TEST_HEADER include/spdk/json.h 00:03:10.511 TEST_HEADER include/spdk/jsonrpc.h 
00:03:10.511 TEST_HEADER include/spdk/likely.h 00:03:10.511 TEST_HEADER include/spdk/log.h 00:03:10.511 CC test/blobfs/mkfs/mkfs.o 00:03:10.511 TEST_HEADER include/spdk/lvol.h 00:03:10.511 TEST_HEADER include/spdk/memory.h 00:03:10.511 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.511 CC test/dma/test_dma/test_dma.o 00:03:10.511 TEST_HEADER include/spdk/mmio.h 00:03:10.511 TEST_HEADER include/spdk/nbd.h 00:03:10.511 CC test/accel/dif/dif.o 00:03:10.511 CC examples/blob/hello_world/hello_blob.o 00:03:10.511 TEST_HEADER include/spdk/notify.h 00:03:10.511 CC examples/blob/cli/blobcli.o 00:03:10.511 CC test/app/bdev_svc/bdev_svc.o 00:03:10.511 CC examples/nvmf/nvmf/nvmf.o 00:03:10.773 TEST_HEADER include/spdk/nvme.h 00:03:10.774 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.774 CC test/bdev/bdevio/bdevio.o 00:03:10.774 CC examples/thread/thread/thread_ex.o 00:03:10.774 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.774 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.774 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.774 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.774 CC test/env/mem_callbacks/mem_callbacks.o 00:03:10.774 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.774 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.774 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.774 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:10.774 TEST_HEADER include/spdk/nvmf.h 00:03:10.774 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.774 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.774 TEST_HEADER include/spdk/opal.h 00:03:10.774 TEST_HEADER include/spdk/opal_spec.h 00:03:10.774 TEST_HEADER include/spdk/pci_ids.h 00:03:10.774 TEST_HEADER include/spdk/pipe.h 00:03:10.774 CC test/lvol/esnap/esnap.o 00:03:10.774 TEST_HEADER include/spdk/queue.h 00:03:10.774 TEST_HEADER include/spdk/reduce.h 00:03:10.774 TEST_HEADER include/spdk/rpc.h 00:03:10.774 TEST_HEADER include/spdk/scheduler.h 00:03:10.774 TEST_HEADER include/spdk/scsi.h 00:03:10.774 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.774 TEST_HEADER include/spdk/sock.h 00:03:10.774 TEST_HEADER include/spdk/stdinc.h 00:03:10.774 TEST_HEADER include/spdk/string.h 00:03:10.774 TEST_HEADER include/spdk/thread.h 00:03:10.774 TEST_HEADER include/spdk/trace.h 00:03:10.774 TEST_HEADER include/spdk/trace_parser.h 00:03:10.774 LINK spdk_lspci 00:03:10.774 TEST_HEADER include/spdk/tree.h 00:03:10.774 TEST_HEADER include/spdk/ublk.h 00:03:10.774 TEST_HEADER include/spdk/util.h 00:03:10.774 TEST_HEADER include/spdk/uuid.h 00:03:10.774 TEST_HEADER include/spdk/version.h 00:03:10.774 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.774 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.774 TEST_HEADER include/spdk/vhost.h 00:03:10.774 TEST_HEADER include/spdk/vmd.h 00:03:10.774 TEST_HEADER include/spdk/xor.h 00:03:10.774 TEST_HEADER include/spdk/zipf.h 00:03:10.774 CXX test/cpp_headers/accel.o 00:03:10.774 LINK lsvmd 00:03:10.774 LINK jsoncat 00:03:10.774 LINK spdk_nvme_discover 00:03:10.774 LINK rpc_client_test 00:03:10.774 LINK led 00:03:10.774 LINK poller_perf 00:03:10.774 LINK event_perf 00:03:10.774 LINK interrupt_tgt 00:03:11.039 LINK histogram_perf 00:03:11.039 LINK zipf 00:03:11.039 LINK nvmf_tgt 00:03:11.039 LINK stub 00:03:11.039 LINK vhost 00:03:11.039 LINK spdk_trace_record 00:03:11.039 LINK iscsi_tgt 00:03:11.039 LINK bdev_svc 00:03:11.039 LINK ioat_perf 00:03:11.039 LINK hello_world 00:03:11.039 LINK mkfs 00:03:11.039 LINK spdk_tgt 00:03:11.039 LINK hello_sock 00:03:11.039 LINK hotplug 00:03:11.039 LINK mem_callbacks 00:03:11.039 LINK hello_blob 
00:03:11.039 LINK hello_bdev 00:03:11.039 LINK aer 00:03:11.039 LINK thread 00:03:11.298 CC test/env/vtophys/vtophys.o 00:03:11.298 CXX test/cpp_headers/accel_module.o 00:03:11.298 LINK arbitration 00:03:11.298 LINK nvmf 00:03:11.298 LINK spdk_dd 00:03:11.298 LINK idxd_perf 00:03:11.298 LINK reconnect 00:03:11.298 CC test/nvme/reset/reset.o 00:03:11.298 CC test/event/reactor_perf/reactor_perf.o 00:03:11.298 CXX test/cpp_headers/assert.o 00:03:11.298 CC test/event/reactor/reactor.o 00:03:11.298 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:11.298 CC examples/ioat/verify/verify.o 00:03:11.298 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:11.298 LINK spdk_trace 00:03:11.298 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.298 CC examples/nvme/abort/abort.o 00:03:11.298 CXX test/cpp_headers/barrier.o 00:03:11.298 LINK test_dma 00:03:11.298 CXX test/cpp_headers/base64.o 00:03:11.298 CC test/nvme/sgl/sgl.o 00:03:11.298 CC app/fio/bdev/fio_plugin.o 00:03:11.298 CC test/nvme/e2edp/nvme_dp.o 00:03:11.298 LINK bdevio 00:03:11.298 LINK dif 00:03:11.298 CXX test/cpp_headers/bdev.o 00:03:11.587 CXX test/cpp_headers/bdev_module.o 00:03:11.587 CC test/nvme/overhead/overhead.o 00:03:11.587 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.587 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:11.587 LINK accel_perf 00:03:11.587 LINK vtophys 00:03:11.587 CC test/event/app_repeat/app_repeat.o 00:03:11.587 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.587 LINK nvme_manage 00:03:11.587 CXX test/cpp_headers/bdev_zone.o 00:03:11.587 CC test/nvme/err_injection/err_injection.o 00:03:11.587 LINK nvme_fuzz 00:03:11.587 CXX test/cpp_headers/bit_array.o 00:03:11.587 LINK reactor_perf 00:03:11.587 CC test/event/scheduler/scheduler.o 00:03:11.587 CXX test/cpp_headers/bit_pool.o 00:03:11.587 CC test/nvme/startup/startup.o 00:03:11.587 LINK spdk_nvme 00:03:11.587 LINK reactor 00:03:11.587 CC test/env/memory/memory_ut.o 00:03:11.587 LINK blobcli 00:03:11.587 CXX test/cpp_headers/blob_bdev.o 00:03:11.587 LINK env_dpdk_post_init 00:03:11.587 CXX test/cpp_headers/blobfs_bdev.o 00:03:11.587 CC test/nvme/reserve/reserve.o 00:03:11.587 CXX test/cpp_headers/blobfs.o 00:03:11.587 CC test/nvme/simple_copy/simple_copy.o 00:03:11.587 LINK cmb_copy 00:03:11.587 CC test/env/pci/pci_ut.o 00:03:11.856 CC test/nvme/connect_stress/connect_stress.o 00:03:11.856 LINK verify 00:03:11.856 LINK reset 00:03:11.856 CXX test/cpp_headers/blob.o 00:03:11.856 CXX test/cpp_headers/conf.o 00:03:11.856 LINK app_repeat 00:03:11.856 CXX test/cpp_headers/config.o 00:03:11.856 CXX test/cpp_headers/cpuset.o 00:03:11.856 CC test/nvme/boot_partition/boot_partition.o 00:03:11.856 CC test/nvme/compliance/nvme_compliance.o 00:03:11.856 LINK pmr_persistence 00:03:11.856 CXX test/cpp_headers/crc16.o 00:03:11.856 CXX test/cpp_headers/crc32.o 00:03:11.856 CXX test/cpp_headers/crc64.o 00:03:11.856 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.856 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.856 CXX test/cpp_headers/dif.o 00:03:11.856 CXX test/cpp_headers/dma.o 00:03:11.856 CC test/nvme/fdp/fdp.o 00:03:11.856 CXX test/cpp_headers/endian.o 00:03:11.856 CXX test/cpp_headers/env_dpdk.o 00:03:11.856 LINK err_injection 00:03:11.856 CXX test/cpp_headers/env.o 00:03:11.856 CC test/nvme/cuse/cuse.o 00:03:11.856 CXX test/cpp_headers/event.o 00:03:11.856 LINK sgl 00:03:11.856 LINK nvme_dp 00:03:11.856 CXX test/cpp_headers/fd_group.o 00:03:11.856 LINK startup 00:03:12.117 LINK overhead 00:03:12.117 CXX test/cpp_headers/fd.o 00:03:12.117 
LINK spdk_nvme_perf 00:03:12.117 CXX test/cpp_headers/file.o 00:03:12.117 CXX test/cpp_headers/ftl.o 00:03:12.117 CXX test/cpp_headers/gpt_spec.o 00:03:12.117 CXX test/cpp_headers/hexlify.o 00:03:12.117 LINK reserve 00:03:12.117 CXX test/cpp_headers/histogram_data.o 00:03:12.117 LINK abort 00:03:12.117 CXX test/cpp_headers/idxd.o 00:03:12.117 LINK scheduler 00:03:12.117 LINK spdk_nvme_identify 00:03:12.117 CXX test/cpp_headers/idxd_spec.o 00:03:12.117 LINK connect_stress 00:03:12.117 LINK simple_copy 00:03:12.117 CXX test/cpp_headers/init.o 00:03:12.117 CXX test/cpp_headers/ioat.o 00:03:12.117 CXX test/cpp_headers/ioat_spec.o 00:03:12.117 LINK bdevperf 00:03:12.117 CXX test/cpp_headers/iscsi_spec.o 00:03:12.117 LINK boot_partition 00:03:12.117 LINK spdk_top 00:03:12.117 CXX test/cpp_headers/json.o 00:03:12.117 CXX test/cpp_headers/jsonrpc.o 00:03:12.117 CXX test/cpp_headers/likely.o 00:03:12.379 CXX test/cpp_headers/log.o 00:03:12.379 CXX test/cpp_headers/lvol.o 00:03:12.379 CXX test/cpp_headers/memory.o 00:03:12.379 CXX test/cpp_headers/mmio.o 00:03:12.379 LINK doorbell_aers 00:03:12.379 CXX test/cpp_headers/nbd.o 00:03:12.379 CXX test/cpp_headers/notify.o 00:03:12.379 CXX test/cpp_headers/nvme.o 00:03:12.379 CXX test/cpp_headers/nvme_intel.o 00:03:12.379 LINK vhost_fuzz 00:03:12.379 LINK fused_ordering 00:03:12.379 CXX test/cpp_headers/nvme_ocssd.o 00:03:12.379 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:12.379 LINK spdk_bdev 00:03:12.379 CXX test/cpp_headers/nvme_spec.o 00:03:12.379 CXX test/cpp_headers/nvme_zns.o 00:03:12.379 CXX test/cpp_headers/nvmf_cmd.o 00:03:12.379 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:12.379 CXX test/cpp_headers/nvmf.o 00:03:12.379 CXX test/cpp_headers/nvmf_spec.o 00:03:12.379 CXX test/cpp_headers/nvmf_transport.o 00:03:12.379 CXX test/cpp_headers/opal.o 00:03:12.379 CXX test/cpp_headers/opal_spec.o 00:03:12.379 CXX test/cpp_headers/pci_ids.o 00:03:12.379 CXX test/cpp_headers/pipe.o 00:03:12.379 CXX test/cpp_headers/queue.o 00:03:12.379 CXX test/cpp_headers/reduce.o 00:03:12.379 CXX test/cpp_headers/rpc.o 00:03:12.379 LINK pci_ut 00:03:12.379 CXX test/cpp_headers/scheduler.o 00:03:12.379 CXX test/cpp_headers/scsi.o 00:03:12.379 CXX test/cpp_headers/scsi_spec.o 00:03:12.379 CXX test/cpp_headers/sock.o 00:03:12.379 CXX test/cpp_headers/stdinc.o 00:03:12.638 CXX test/cpp_headers/string.o 00:03:12.638 CXX test/cpp_headers/thread.o 00:03:12.638 LINK nvme_compliance 00:03:12.638 CXX test/cpp_headers/trace.o 00:03:12.638 LINK memory_ut 00:03:12.638 CXX test/cpp_headers/trace_parser.o 00:03:12.638 CXX test/cpp_headers/tree.o 00:03:12.638 LINK fdp 00:03:12.638 CXX test/cpp_headers/ublk.o 00:03:12.638 CXX test/cpp_headers/util.o 00:03:12.638 CXX test/cpp_headers/uuid.o 00:03:12.638 CXX test/cpp_headers/version.o 00:03:12.638 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.638 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.638 CXX test/cpp_headers/vhost.o 00:03:12.638 CXX test/cpp_headers/vmd.o 00:03:12.638 CXX test/cpp_headers/xor.o 00:03:12.638 CXX test/cpp_headers/zipf.o 00:03:13.574 LINK cuse 00:03:13.831 LINK iscsi_fuzz 00:03:16.361 LINK esnap 00:03:16.361 00:03:16.361 real 0m37.992s 00:03:16.361 user 7m15.299s 00:03:16.361 sys 1m37.274s 00:03:16.361 19:12:14 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:16.361 19:12:14 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.361 ************************************ 00:03:16.361 END TEST make 00:03:16.361 ************************************ 00:03:16.620 19:12:14 -- common/autotest_common.sh@1689 -- # [[ y 
== y ]] 00:03:16.620 19:12:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:16.620 19:12:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:16.620 19:12:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:16.620 19:12:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:16.620 19:12:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:16.620 19:12:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:16.620 19:12:14 -- scripts/common.sh@335 -- # IFS=.-: 00:03:16.620 19:12:14 -- scripts/common.sh@335 -- # read -ra ver1 00:03:16.620 19:12:14 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.620 19:12:14 -- scripts/common.sh@336 -- # read -ra ver2 00:03:16.620 19:12:14 -- scripts/common.sh@337 -- # local 'op=<' 00:03:16.620 19:12:14 -- scripts/common.sh@339 -- # ver1_l=2 00:03:16.620 19:12:14 -- scripts/common.sh@340 -- # ver2_l=1 00:03:16.620 19:12:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:16.620 19:12:14 -- scripts/common.sh@343 -- # case "$op" in 00:03:16.620 19:12:14 -- scripts/common.sh@344 -- # : 1 00:03:16.620 19:12:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:16.620 19:12:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.620 19:12:14 -- scripts/common.sh@364 -- # decimal 1 00:03:16.620 19:12:14 -- scripts/common.sh@352 -- # local d=1 00:03:16.620 19:12:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.620 19:12:14 -- scripts/common.sh@354 -- # echo 1 00:03:16.620 19:12:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:16.620 19:12:14 -- scripts/common.sh@365 -- # decimal 2 00:03:16.620 19:12:14 -- scripts/common.sh@352 -- # local d=2 00:03:16.620 19:12:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.620 19:12:14 -- scripts/common.sh@354 -- # echo 2 00:03:16.620 19:12:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:16.620 19:12:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:16.620 19:12:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:16.620 19:12:14 -- scripts/common.sh@367 -- # return 0 00:03:16.620 19:12:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.620 19:12:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.620 --rc genhtml_branch_coverage=1 00:03:16.620 --rc genhtml_function_coverage=1 00:03:16.620 --rc genhtml_legend=1 00:03:16.620 --rc geninfo_all_blocks=1 00:03:16.620 --rc geninfo_unexecuted_blocks=1 00:03:16.620 00:03:16.620 ' 00:03:16.620 19:12:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.620 --rc genhtml_branch_coverage=1 00:03:16.620 --rc genhtml_function_coverage=1 00:03:16.620 --rc genhtml_legend=1 00:03:16.620 --rc geninfo_all_blocks=1 00:03:16.620 --rc geninfo_unexecuted_blocks=1 00:03:16.620 00:03:16.620 ' 00:03:16.620 19:12:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.620 --rc genhtml_branch_coverage=1 00:03:16.620 --rc genhtml_function_coverage=1 00:03:16.620 --rc genhtml_legend=1 00:03:16.620 --rc geninfo_all_blocks=1 00:03:16.620 --rc geninfo_unexecuted_blocks=1 00:03:16.620 00:03:16.620 ' 00:03:16.620 19:12:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:16.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.620 --rc 
genhtml_branch_coverage=1 00:03:16.620 --rc genhtml_function_coverage=1 00:03:16.620 --rc genhtml_legend=1 00:03:16.620 --rc geninfo_all_blocks=1 00:03:16.620 --rc geninfo_unexecuted_blocks=1 00:03:16.620 00:03:16.620 ' 00:03:16.620 19:12:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.620 19:12:14 -- nvmf/common.sh@7 -- # uname -s 00:03:16.620 19:12:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.620 19:12:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.620 19:12:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.620 19:12:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.620 19:12:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.620 19:12:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.620 19:12:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.620 19:12:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.620 19:12:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.620 19:12:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.620 19:12:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:16.620 19:12:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:16.620 19:12:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.620 19:12:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.620 19:12:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:16.620 19:12:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:16.620 19:12:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.620 19:12:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.620 19:12:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.621 19:12:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.621 19:12:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.621 19:12:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.621 19:12:14 -- paths/export.sh@5 -- # export PATH 00:03:16.621 19:12:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.621 19:12:14 -- nvmf/common.sh@46 -- # : 0 00:03:16.621 19:12:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:16.621 19:12:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:16.621 19:12:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 
']' 00:03:16.621 19:12:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.621 19:12:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.621 19:12:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:16.621 19:12:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:16.621 19:12:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:16.621 19:12:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.621 19:12:14 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.621 19:12:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.621 19:12:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.621 19:12:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.621 19:12:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.621 19:12:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.621 19:12:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.621 19:12:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.621 19:12:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.621 19:12:14 -- spdk/autotest.sh@48 -- # udevadm_pid=1042873 00:03:16.621 19:12:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.621 19:12:14 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:16.621 19:12:14 -- spdk/autotest.sh@54 -- # echo 1042875 00:03:16.621 19:12:14 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:16.621 19:12:14 -- spdk/autotest.sh@56 -- # echo 1042876 00:03:16.621 19:12:14 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:16.621 19:12:14 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:16.621 19:12:14 -- spdk/autotest.sh@60 -- # echo 1042877 00:03:16.621 19:12:14 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:16.621 19:12:14 -- spdk/autotest.sh@62 -- # echo 1042878 00:03:16.621 19:12:14 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:16.621 19:12:14 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.621 19:12:14 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:16.621 19:12:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:16.621 19:12:14 -- common/autotest_common.sh@10 -- # set +x 00:03:16.621 19:12:14 -- spdk/autotest.sh@70 -- # create_test_list 00:03:16.621 19:12:14 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:16.621 19:12:14 -- common/autotest_common.sh@10 -- # set +x 00:03:16.621 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:16.621 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:16.621 19:12:14 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:16.621 19:12:14 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.621 19:12:14 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.621 19:12:14 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:16.621 19:12:14 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.621 19:12:14 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:16.621 19:12:14 -- common/autotest_common.sh@1450 -- # uname 00:03:16.621 19:12:14 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:16.621 19:12:14 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:16.621 19:12:14 -- common/autotest_common.sh@1470 -- # uname 00:03:16.621 19:12:14 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:16.621 19:12:14 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:16.621 19:12:14 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:16.621 lcov: LCOV version 1.15 00:03:16.621 19:12:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:20.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:52.908 19:12:48 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:52.908 19:12:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.908 19:12:48 -- common/autotest_common.sh@10 -- # set +x 00:03:52.908 19:12:48 -- spdk/autotest.sh@89 -- # rm -f 00:03:52.908 19:12:48 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.908 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:52.908 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:52.908 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:52.908 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:52.908 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:52.908 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:52.908 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:52.908 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:52.908 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:52.908 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:52.908 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:52.908 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:52.908 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:52.908 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:52.908 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:52.908 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:52.908 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:52.908 19:12:49 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:52.908 19:12:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:52.908 19:12:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:52.908 19:12:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:52.908 19:12:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:52.908 19:12:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:52.908 19:12:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:52.908 19:12:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.908 19:12:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:52.908 19:12:49 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:52.908 19:12:49 -- spdk/autotest.sh@108 -- # grep -v p 00:03:52.908 19:12:49 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:52.908 19:12:49 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:52.908 19:12:49 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:52.908 19:12:49 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:52.908 19:12:49 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:52.908 19:12:49 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.908 No valid GPT data, bailing 00:03:52.908 19:12:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.908 19:12:49 -- scripts/common.sh@393 -- # 
pt= 00:03:52.908 19:12:49 -- scripts/common.sh@394 -- # return 1 00:03:52.908 19:12:49 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.908 1+0 records in 00:03:52.908 1+0 records out 00:03:52.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00229299 s, 457 MB/s 00:03:52.908 19:12:49 -- spdk/autotest.sh@116 -- # sync 00:03:52.908 19:12:49 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.908 19:12:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.908 19:12:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:53.843 19:12:51 -- spdk/autotest.sh@122 -- # uname -s 00:03:53.843 19:12:51 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:53.843 19:12:51 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:53.843 19:12:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.843 19:12:51 -- common/autotest_common.sh@10 -- # set +x 00:03:53.843 ************************************ 00:03:53.843 START TEST setup.sh 00:03:53.843 ************************************ 00:03:53.843 19:12:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:53.843 * Looking for test storage... 00:03:53.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.843 19:12:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:53.843 19:12:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:53.843 19:12:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:53.843 19:12:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:53.843 19:12:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:53.843 19:12:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:53.843 19:12:51 -- scripts/common.sh@335 -- # IFS=.-: 00:03:53.843 19:12:51 -- scripts/common.sh@335 -- # read -ra ver1 00:03:53.843 19:12:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.843 19:12:51 -- scripts/common.sh@336 -- # read -ra ver2 00:03:53.843 19:12:51 -- scripts/common.sh@337 -- # local 'op=<' 00:03:53.843 19:12:51 -- scripts/common.sh@339 -- # ver1_l=2 00:03:53.843 19:12:51 -- scripts/common.sh@340 -- # ver2_l=1 00:03:53.843 19:12:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:53.843 19:12:51 -- scripts/common.sh@343 -- # case "$op" in 00:03:53.843 19:12:51 -- scripts/common.sh@344 -- # : 1 00:03:53.843 19:12:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:53.843 19:12:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.843 19:12:51 -- scripts/common.sh@364 -- # decimal 1 00:03:53.843 19:12:51 -- scripts/common.sh@352 -- # local d=1 00:03:53.843 19:12:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.843 19:12:51 -- scripts/common.sh@354 -- # echo 1 00:03:53.843 19:12:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:53.843 19:12:51 -- scripts/common.sh@365 -- # decimal 2 00:03:53.843 19:12:51 -- scripts/common.sh@352 -- # local d=2 00:03:53.843 19:12:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.843 19:12:51 -- scripts/common.sh@354 -- # echo 2 00:03:53.843 19:12:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:53.843 19:12:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:53.843 19:12:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:53.843 19:12:51 -- scripts/common.sh@367 -- # return 0 00:03:53.843 19:12:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:53.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.843 --rc genhtml_branch_coverage=1 00:03:53.843 --rc genhtml_function_coverage=1 00:03:53.843 --rc genhtml_legend=1 00:03:53.843 --rc geninfo_all_blocks=1 00:03:53.843 --rc geninfo_unexecuted_blocks=1 00:03:53.843 00:03:53.843 ' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:53.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.843 --rc genhtml_branch_coverage=1 00:03:53.843 --rc genhtml_function_coverage=1 00:03:53.843 --rc genhtml_legend=1 00:03:53.843 --rc geninfo_all_blocks=1 00:03:53.843 --rc geninfo_unexecuted_blocks=1 00:03:53.843 00:03:53.843 ' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:53.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.843 --rc genhtml_branch_coverage=1 00:03:53.843 --rc genhtml_function_coverage=1 00:03:53.843 --rc genhtml_legend=1 00:03:53.843 --rc geninfo_all_blocks=1 00:03:53.843 --rc geninfo_unexecuted_blocks=1 00:03:53.843 00:03:53.843 ' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:53.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.843 --rc genhtml_branch_coverage=1 00:03:53.843 --rc genhtml_function_coverage=1 00:03:53.843 --rc genhtml_legend=1 00:03:53.843 --rc geninfo_all_blocks=1 00:03:53.843 --rc geninfo_unexecuted_blocks=1 00:03:53.843 00:03:53.843 ' 00:03:53.843 19:12:51 -- setup/test-setup.sh@10 -- # uname -s 00:03:53.843 19:12:51 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:53.843 19:12:51 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:53.843 19:12:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.843 19:12:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.843 19:12:51 -- common/autotest_common.sh@10 -- # set +x 00:03:53.843 ************************************ 00:03:53.843 START TEST acl 00:03:53.843 ************************************ 00:03:53.843 19:12:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:53.843 * Looking for test storage... 
00:03:53.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.843 19:12:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:53.843 19:12:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:53.843 19:12:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:54.101 19:12:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:54.101 19:12:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:54.101 19:12:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:54.101 19:12:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:54.101 19:12:52 -- scripts/common.sh@335 -- # IFS=.-: 00:03:54.101 19:12:52 -- scripts/common.sh@335 -- # read -ra ver1 00:03:54.101 19:12:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.101 19:12:52 -- scripts/common.sh@336 -- # read -ra ver2 00:03:54.101 19:12:52 -- scripts/common.sh@337 -- # local 'op=<' 00:03:54.101 19:12:52 -- scripts/common.sh@339 -- # ver1_l=2 00:03:54.101 19:12:52 -- scripts/common.sh@340 -- # ver2_l=1 00:03:54.101 19:12:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:54.101 19:12:52 -- scripts/common.sh@343 -- # case "$op" in 00:03:54.101 19:12:52 -- scripts/common.sh@344 -- # : 1 00:03:54.101 19:12:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:54.101 19:12:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:54.101 19:12:52 -- scripts/common.sh@364 -- # decimal 1 00:03:54.102 19:12:52 -- scripts/common.sh@352 -- # local d=1 00:03:54.102 19:12:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.102 19:12:52 -- scripts/common.sh@354 -- # echo 1 00:03:54.102 19:12:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:54.102 19:12:52 -- scripts/common.sh@365 -- # decimal 2 00:03:54.102 19:12:52 -- scripts/common.sh@352 -- # local d=2 00:03:54.102 19:12:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.102 19:12:52 -- scripts/common.sh@354 -- # echo 2 00:03:54.102 19:12:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:54.102 19:12:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:54.102 19:12:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:54.102 19:12:52 -- scripts/common.sh@367 -- # return 0 00:03:54.102 19:12:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.102 19:12:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.102 --rc genhtml_branch_coverage=1 00:03:54.102 --rc genhtml_function_coverage=1 00:03:54.102 --rc genhtml_legend=1 00:03:54.102 --rc geninfo_all_blocks=1 00:03:54.102 --rc geninfo_unexecuted_blocks=1 00:03:54.102 00:03:54.102 ' 00:03:54.102 19:12:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.102 --rc genhtml_branch_coverage=1 00:03:54.102 --rc genhtml_function_coverage=1 00:03:54.102 --rc genhtml_legend=1 00:03:54.102 --rc geninfo_all_blocks=1 00:03:54.102 --rc geninfo_unexecuted_blocks=1 00:03:54.102 00:03:54.102 ' 00:03:54.102 19:12:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.102 --rc genhtml_branch_coverage=1 00:03:54.102 --rc genhtml_function_coverage=1 00:03:54.102 --rc genhtml_legend=1 00:03:54.102 --rc geninfo_all_blocks=1 00:03:54.102 --rc geninfo_unexecuted_blocks=1 00:03:54.102 00:03:54.102 ' 
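The xtrace above keeps stepping through the lt/cmp_versions helpers from scripts/common.sh to decide whether the installed lcov (1.15 on this runner) predates 2.0 and therefore still needs the old --rc lcov_branch_coverage / --rc lcov_function_coverage option names. A minimal sketch of that component-wise comparison, reconstructed from the trace rather than copied from the real scripts/common.sh (ver_lt is an illustrative stand-in name), could look like this:

    # Illustrative sketch only: a simplified version of the comparison the
    # trace walks through; assumes purely numeric version components.
    ver_lt() {                                   # ver_lt A B -> true if A < B
        local -a v1 v2
        local i n a b
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=$(( 10#${v1[i]:-0} )); b=$(( 10#${v2[i]:-0} ))   # missing parts count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                 # equal means not less-than
    }

    # Same shape as the check traced above in autotest_common.sh:
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi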
00:03:54.102 19:12:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.102 --rc genhtml_branch_coverage=1 00:03:54.102 --rc genhtml_function_coverage=1 00:03:54.102 --rc genhtml_legend=1 00:03:54.102 --rc geninfo_all_blocks=1 00:03:54.102 --rc geninfo_unexecuted_blocks=1 00:03:54.102 00:03:54.102 ' 00:03:54.102 19:12:52 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:54.102 19:12:52 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:54.102 19:12:52 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:54.102 19:12:52 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:54.102 19:12:52 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:54.102 19:12:52 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:54.102 19:12:52 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:54.102 19:12:52 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.102 19:12:52 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:54.102 19:12:52 -- setup/acl.sh@12 -- # devs=() 00:03:54.102 19:12:52 -- setup/acl.sh@12 -- # declare -a devs 00:03:54.102 19:12:52 -- setup/acl.sh@13 -- # drivers=() 00:03:54.102 19:12:52 -- setup/acl.sh@13 -- # declare -A drivers 00:03:54.102 19:12:52 -- setup/acl.sh@51 -- # setup reset 00:03:54.102 19:12:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.102 19:12:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.477 19:12:53 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:55.477 19:12:53 -- setup/acl.sh@16 -- # local dev driver 00:03:55.477 19:12:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.477 19:12:53 -- setup/acl.sh@15 -- # setup output status 00:03:55.477 19:12:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.477 19:12:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:56.851 Hugepages 00:03:56.851 node hugesize free / total 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # continue 00:03:56.851 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # continue 00:03:56.851 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # continue 00:03:56.851 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.851 00:03:56.851 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # continue 00:03:56.851 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:56.851 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.851 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.851 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.851 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:56.851 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.851 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # continue 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:56.852 19:12:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:56.852 19:12:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:56.852 19:12:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:56.852 19:12:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:56.852 19:12:54 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:56.852 19:12:54 -- setup/acl.sh@54 -- # run_test denied denied 00:03:56.852 19:12:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.852 19:12:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.852 19:12:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.852 ************************************ 00:03:56.852 START TEST denied 00:03:56.852 ************************************ 00:03:56.852 19:12:54 -- common/autotest_common.sh@1114 -- # denied 00:03:56.852 19:12:54 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:56.852 19:12:54 -- setup/acl.sh@38 -- # setup output config 00:03:56.852 19:12:54 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:56.852 19:12:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.852 19:12:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.231 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:58.231 19:12:56 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:58.231 19:12:56 -- setup/acl.sh@28 -- # local dev driver 00:03:58.231 19:12:56 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:58.231 19:12:56 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:58.231 19:12:56 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:58.231 19:12:56 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:58.231 19:12:56 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:58.232 19:12:56 -- setup/acl.sh@41 -- # setup reset 00:03:58.232 19:12:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.232 19:12:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.824 00:04:00.824 real 0m4.143s 00:04:00.824 user 0m1.173s 00:04:00.824 sys 0m2.021s 00:04:00.824 19:12:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.824 19:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:00.824 ************************************ 00:04:00.824 END TEST denied 00:04:00.824 ************************************ 00:04:00.824 19:12:59 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.824 19:12:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.824 19:12:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.824 19:12:59 -- common/autotest_common.sh@10 -- # set +x 00:04:00.824 ************************************ 00:04:00.824 START TEST allowed 00:04:00.824 ************************************ 00:04:00.824 19:12:59 -- common/autotest_common.sh@1114 -- # allowed 00:04:00.824 19:12:59 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:00.824 19:12:59 -- setup/acl.sh@45 -- # setup output config 
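The denied/allowed checks traced above come down to reading the driver symlink for the controller's PCI address under sysfs and comparing it with the expected driver name. A minimal sketch of that verification, using 0000:88:00.0 from this run and a hypothetical check_driver helper (not the exact acl.sh code):

  # Sketch: report whether a PCI function is bound to the expected kernel driver.
  check_driver() {
      local bdf=$1 expected=$2 driver
      # a bound device exposes a "driver" symlink pointing at /sys/bus/pci/drivers/<name>
      [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1
      driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
      [[ ${driver##*/} == "$expected" ]]
  }
  check_driver 0000:88:00.0 nvme && echo "still on the kernel nvme driver"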
00:04:00.824 19:12:59 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:00.824 19:12:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.824 19:12:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.359 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.359 19:13:01 -- setup/acl.sh@47 -- # verify 00:04:03.359 19:13:01 -- setup/acl.sh@28 -- # local dev driver 00:04:03.359 19:13:01 -- setup/acl.sh@48 -- # setup reset 00:04:03.359 19:13:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.359 19:13:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.264 00:04:05.264 real 0m4.023s 00:04:05.264 user 0m1.104s 00:04:05.264 sys 0m1.812s 00:04:05.264 19:13:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.264 19:13:03 -- common/autotest_common.sh@10 -- # set +x 00:04:05.264 ************************************ 00:04:05.264 END TEST allowed 00:04:05.264 ************************************ 00:04:05.264 00:04:05.264 real 0m11.061s 00:04:05.264 user 0m3.442s 00:04:05.264 sys 0m5.663s 00:04:05.264 19:13:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.264 19:13:03 -- common/autotest_common.sh@10 -- # set +x 00:04:05.264 ************************************ 00:04:05.264 END TEST acl 00:04:05.264 ************************************ 00:04:05.264 19:13:03 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:05.264 19:13:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.264 19:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.264 19:13:03 -- common/autotest_common.sh@10 -- # set +x 00:04:05.264 ************************************ 00:04:05.264 START TEST hugepages 00:04:05.264 ************************************ 00:04:05.264 19:13:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:05.264 * Looking for test storage... 00:04:05.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:05.264 19:13:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:05.264 19:13:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:05.264 19:13:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:05.264 19:13:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:05.264 19:13:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:05.264 19:13:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:05.264 19:13:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:05.264 19:13:03 -- scripts/common.sh@335 -- # IFS=.-: 00:04:05.264 19:13:03 -- scripts/common.sh@335 -- # read -ra ver1 00:04:05.264 19:13:03 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.264 19:13:03 -- scripts/common.sh@336 -- # read -ra ver2 00:04:05.264 19:13:03 -- scripts/common.sh@337 -- # local 'op=<' 00:04:05.264 19:13:03 -- scripts/common.sh@339 -- # ver1_l=2 00:04:05.264 19:13:03 -- scripts/common.sh@340 -- # ver2_l=1 00:04:05.264 19:13:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:05.264 19:13:03 -- scripts/common.sh@343 -- # case "$op" in 00:04:05.264 19:13:03 -- scripts/common.sh@344 -- # : 1 00:04:05.264 19:13:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:05.264 19:13:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.264 19:13:03 -- scripts/common.sh@364 -- # decimal 1 00:04:05.264 19:13:03 -- scripts/common.sh@352 -- # local d=1 00:04:05.264 19:13:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.264 19:13:03 -- scripts/common.sh@354 -- # echo 1 00:04:05.264 19:13:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:05.264 19:13:03 -- scripts/common.sh@365 -- # decimal 2 00:04:05.264 19:13:03 -- scripts/common.sh@352 -- # local d=2 00:04:05.264 19:13:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.264 19:13:03 -- scripts/common.sh@354 -- # echo 2 00:04:05.264 19:13:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:05.264 19:13:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:05.264 19:13:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:05.264 19:13:03 -- scripts/common.sh@367 -- # return 0 00:04:05.265 19:13:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.265 19:13:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:05.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.265 --rc genhtml_branch_coverage=1 00:04:05.265 --rc genhtml_function_coverage=1 00:04:05.265 --rc genhtml_legend=1 00:04:05.265 --rc geninfo_all_blocks=1 00:04:05.265 --rc geninfo_unexecuted_blocks=1 00:04:05.265 00:04:05.265 ' 00:04:05.265 19:13:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:05.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.265 --rc genhtml_branch_coverage=1 00:04:05.265 --rc genhtml_function_coverage=1 00:04:05.265 --rc genhtml_legend=1 00:04:05.265 --rc geninfo_all_blocks=1 00:04:05.265 --rc geninfo_unexecuted_blocks=1 00:04:05.265 00:04:05.265 ' 00:04:05.265 19:13:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:05.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.265 --rc genhtml_branch_coverage=1 00:04:05.265 --rc genhtml_function_coverage=1 00:04:05.265 --rc genhtml_legend=1 00:04:05.265 --rc geninfo_all_blocks=1 00:04:05.265 --rc geninfo_unexecuted_blocks=1 00:04:05.265 00:04:05.265 ' 00:04:05.265 19:13:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:05.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.265 --rc genhtml_branch_coverage=1 00:04:05.265 --rc genhtml_function_coverage=1 00:04:05.265 --rc genhtml_legend=1 00:04:05.265 --rc geninfo_all_blocks=1 00:04:05.265 --rc geninfo_unexecuted_blocks=1 00:04:05.265 00:04:05.265 ' 00:04:05.265 19:13:03 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:05.265 19:13:03 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:05.265 19:13:03 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:05.265 19:13:03 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:05.265 19:13:03 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:05.265 19:13:03 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:05.265 19:13:03 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:05.265 19:13:03 -- setup/common.sh@18 -- # local node= 00:04:05.265 19:13:03 -- setup/common.sh@19 -- # local var val 00:04:05.265 19:13:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.265 19:13:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.265 19:13:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.265 19:13:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.265 19:13:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.265 
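The lt/cmp_versions walk above is scripts/common.sh comparing dotted version strings field by field (here 1.15 against 2, driven by the lcov version probe). A simplified sketch of the same comparison, not the exact implementation, assuming purely numeric components:

  # Sketch: succeed if dotted version $1 is strictly lower than $2.
  version_lt() {
      local -a a b
      local i x y
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}        # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                              # equal versions are not "lower than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"      # the check that enables the LCOV branch options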
19:13:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 41962700 kB' 'MemAvailable: 45710872 kB' 'Buffers: 2708 kB' 'Cached: 11938660 kB' 'SwapCached: 0 kB' 'Active: 8636960 kB' 'Inactive: 3752604 kB' 'Active(anon): 8244828 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451620 kB' 'Mapped: 165352 kB' 'Shmem: 7796632 kB' 'KReclaimable: 194032 kB' 'Slab: 669944 kB' 'SReclaimable: 194032 kB' 'SUnreclaim: 475912 kB' 'KernelStack: 12848 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36559288 kB' 'Committed_AS: 9310820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198296 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 
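The meminfo snapshot printed just above already carries the hugepage bookkeeping this suite cares about, and the fields are related by simple arithmetic; as a rough cross-check against the values from this run:

  # Values from the snapshot above:
  #   Hugepagesize:    2048 kB             (2 MiB pages)
  #   HugePages_Total: 2048
  #   Hugetlb:         4194304 kB  =  2048 pages x 2048 kB  =  4 GiB set aside
  grep -E '^(HugePages_Total|Hugepagesize|Hugetlb):' /proc/meminfo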
19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.265 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.265 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 
19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.266 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.266 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # continue 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.267 19:13:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.267 19:13:03 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.267 19:13:03 -- setup/common.sh@33 -- # echo 2048 00:04:05.267 19:13:03 -- setup/common.sh@33 -- # return 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:05.267 19:13:03 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:05.267 19:13:03 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:05.267 19:13:03 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:05.267 19:13:03 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:05.267 19:13:03 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:05.267 19:13:03 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:05.267 19:13:03 -- setup/hugepages.sh@207 -- # get_nodes 00:04:05.267 19:13:03 -- setup/hugepages.sh@27 -- # local node 00:04:05.267 19:13:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.267 19:13:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:05.267 19:13:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.267 19:13:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.267 19:13:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.267 19:13:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.267 19:13:03 -- setup/hugepages.sh@208 -- # clear_hp 00:04:05.267 19:13:03 -- setup/hugepages.sh@37 -- # local node hp 00:04:05.267 19:13:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.267 19:13:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.267 19:13:03 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.267 
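The long key-by-key walk above is setup/common.sh scanning /proc/meminfo until the requested field (Hugepagesize) matches, then echoing its value. A condensed sketch of the same idea, assuming the system-wide file only (the real helper can also read a per-node meminfo and strips the "Node <n>" prefix):

  # Sketch: print the numeric value of one /proc/meminfo field.
  get_meminfo_value() {
      local key=$1 var val _
      # an IFS of ':' plus space splits "Hugepagesize:   2048 kB" into key / value / unit
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  get_meminfo_value Hugepagesize    # prints 2048 on this machine, as in the trace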
19:13:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.267 19:13:03 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.267 19:13:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.267 19:13:03 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.267 19:13:03 -- setup/hugepages.sh@41 -- # echo 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.267 19:13:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.267 19:13:03 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:05.267 19:13:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.267 19:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.267 19:13:03 -- common/autotest_common.sh@10 -- # set +x 00:04:05.267 ************************************ 00:04:05.267 START TEST default_setup 00:04:05.267 ************************************ 00:04:05.267 19:13:03 -- common/autotest_common.sh@1114 -- # default_setup 00:04:05.267 19:13:03 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.267 19:13:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:05.267 19:13:03 -- setup/hugepages.sh@51 -- # shift 00:04:05.267 19:13:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:05.267 19:13:03 -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.267 19:13:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.267 19:13:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.267 19:13:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:05.267 19:13:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.267 19:13:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.267 19:13:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.267 19:13:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.267 19:13:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.267 19:13:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:05.267 19:13:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.267 19:13:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:05.267 19:13:03 -- setup/hugepages.sh@73 -- # return 0 00:04:05.267 19:13:03 -- setup/hugepages.sh@137 -- # setup output 00:04:05.267 19:13:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.267 19:13:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.647 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:06.647 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:06.647 
0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:06.647 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:07.592 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:07.592 19:13:05 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:07.592 19:13:05 -- setup/hugepages.sh@89 -- # local node 00:04:07.592 19:13:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.592 19:13:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.592 19:13:05 -- setup/hugepages.sh@92 -- # local surp 00:04:07.592 19:13:05 -- setup/hugepages.sh@93 -- # local resv 00:04:07.592 19:13:05 -- setup/hugepages.sh@94 -- # local anon 00:04:07.592 19:13:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.592 19:13:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.592 19:13:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.592 19:13:05 -- setup/common.sh@18 -- # local node= 00:04:07.592 19:13:05 -- setup/common.sh@19 -- # local var val 00:04:07.592 19:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.592 19:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.592 19:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.592 19:13:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.592 19:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.592 19:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44098076 kB' 'MemAvailable: 47846252 kB' 'Buffers: 2708 kB' 'Cached: 11938772 kB' 'SwapCached: 0 kB' 'Active: 8639452 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247320 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453792 kB' 'Mapped: 165416 kB' 'Shmem: 7796744 kB' 'KReclaimable: 194040 kB' 'Slab: 669456 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475416 kB' 'KernelStack: 12704 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9313436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198440 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 
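Between the two meminfo passes, clear_hp and default_setup drive the hugepage pools through the per-node sysfs knobs: every pool is zeroed, then node 0 is asked for 1024 pages of the 2048 kB size before the allocation is verified. A rough sketch of those writes, assuming 2 MiB pages and root privileges (not the exact hugepages.sh code):

  # Sketch: reset per-node 2 MiB pools, then allocate 1024 pages on node 0.
  for node in /sys/devices/system/node/node[0-9]*; do
      echo 0 > "$node/hugepages/hugepages-2048kB/nr_hugepages"
  done
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  grep -E '^HugePages_(Total|Free):' /proc/meminfo   # expect 1024 / 1024, as in the snapshot above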
00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 
-- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.593 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.593 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.594 19:13:05 -- setup/common.sh@33 -- # echo 0 00:04:07.594 19:13:05 -- setup/common.sh@33 -- # return 0 00:04:07.594 19:13:05 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.594 19:13:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.594 19:13:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.594 19:13:05 -- setup/common.sh@18 -- # local node= 00:04:07.594 19:13:05 -- setup/common.sh@19 -- # local var val 00:04:07.594 19:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.594 19:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.594 19:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.594 19:13:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.594 19:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.594 19:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44097824 kB' 'MemAvailable: 47846000 kB' 'Buffers: 2708 kB' 'Cached: 11938772 kB' 'SwapCached: 0 kB' 'Active: 8639564 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247432 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453876 kB' 'Mapped: 165424 kB' 'Shmem: 7796744 kB' 'KReclaimable: 194040 kB' 'Slab: 669428 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475388 kB' 'KernelStack: 12784 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9313448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.594 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.594 19:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
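The second and third meminfo passes here collect AnonHugePages and HugePages_Surp (HugePages_Rsvd follows) so the verifier can tell transparent-hugepage usage and surplus pages apart from the pool it just configured. The same numbers can be pulled directly, as a rough sanity check outside the harness:

  # Sketch: the quantities this verification pass is gathering.
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)    # THP usage, not pool pages
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # surplus beyond nr_hugepages
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # reserved but not yet faulted in
  echo "anon=${anon} kB  surp=${surp}  rsvd=${rsvd}"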
00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.595 19:13:05 -- setup/common.sh@33 -- # echo 0 00:04:07.595 19:13:05 -- setup/common.sh@33 -- # return 0 00:04:07.595 19:13:05 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.595 19:13:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.595 19:13:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.595 19:13:05 -- setup/common.sh@18 -- # local node= 00:04:07.595 19:13:05 -- setup/common.sh@19 -- # local var val 00:04:07.595 19:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.595 19:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.595 19:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.595 19:13:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.595 19:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.595 19:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
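(Note: the trace above is get_meminfo() from setup/common.sh scanning /proc/meminfo one key at a time under xtrace; every IFS=': ' / read -r var val _ / [[ ... == ... ]] triple is one meminfo line being split and compared against the requested key, here HugePages_Surp and then HugePages_Rsvd. Condensed into a standalone form, the scan amounts to roughly the following; the function name, argument handling and return codes are illustrative, not the suite's exact code:

shopt -s extglob   # needed for the "Node N " prefix strip below

# Print the value of one /proc/meminfo (or per-node meminfo) key.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo file instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix present in per-node files
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # e.g. "HugePages_Surp" / "0"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g. get_meminfo_value HugePages_Surp      -> 0 on this machine, per the trace
#      get_meminfo_value HugePages_Surp 0    -> node0's surplus count
End of note.)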
00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44098600 kB' 'MemAvailable: 47846776 kB' 'Buffers: 2708 kB' 'Cached: 11938784 kB' 'SwapCached: 0 kB' 'Active: 8639140 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247008 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453488 kB' 'Mapped: 165368 kB' 'Shmem: 7796756 kB' 'KReclaimable: 194040 kB' 'Slab: 669492 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475452 kB' 'KernelStack: 12912 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9313464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # 
continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.595 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.595 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 
19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.596 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.596 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.597 19:13:05 -- setup/common.sh@33 -- # echo 0 00:04:07.597 19:13:05 -- setup/common.sh@33 -- # return 0 00:04:07.597 19:13:05 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.597 19:13:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.597 nr_hugepages=1024 00:04:07.597 19:13:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.597 resv_hugepages=0 00:04:07.597 19:13:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.597 surplus_hugepages=0 00:04:07.597 19:13:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.597 anon_hugepages=0 00:04:07.597 19:13:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.597 19:13:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.597 19:13:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.597 19:13:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.597 19:13:05 -- setup/common.sh@18 -- # local node= 00:04:07.597 19:13:05 -- setup/common.sh@19 -- # local var val 00:04:07.597 19:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.597 19:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.597 19:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.597 19:13:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.597 19:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.597 19:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44108588 kB' 'MemAvailable: 47856764 kB' 'Buffers: 2708 kB' 'Cached: 11938804 kB' 'SwapCached: 0 kB' 'Active: 8638720 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246588 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453056 kB' 'Mapped: 165368 
kB' 'Shmem: 7796776 kB' 'KReclaimable: 194040 kB' 'Slab: 669444 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475404 kB' 'KernelStack: 12880 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9313480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 
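(Note: at this point verify_nr_hugepages has already collected surp=0 and resv=0 and echoed nr_hugepages=1024, and it is re-reading HugePages_Total so the checks at hugepages.sh@107-110, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), can confirm the kernel's counters add up to the pages the test configured. A minimal sketch of that bookkeeping, reading the counters directly rather than through the suite's get_meminfo helper; variable names and messages are illustrative:

expected_nr_hugepages=1024                                        # what default_setup asked for
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)       # 1024 in this trace
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)        # 0
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)        # 0

if (( total == expected_nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage counts: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi
End of note.)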
00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.597 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.597 19:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.597 
19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 
19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.598 19:13:05 -- setup/common.sh@33 -- # echo 1024 00:04:07.598 19:13:05 -- setup/common.sh@33 -- # return 0 00:04:07.598 19:13:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.598 19:13:05 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.598 19:13:05 -- setup/hugepages.sh@27 -- # local node 00:04:07.598 19:13:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.598 19:13:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.598 19:13:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.598 19:13:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.598 19:13:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.598 19:13:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.598 19:13:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.598 19:13:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.598 19:13:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.598 19:13:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.598 19:13:05 -- setup/common.sh@18 -- # local node=0 00:04:07.598 19:13:05 -- setup/common.sh@19 -- # local var val 00:04:07.598 19:13:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.598 19:13:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.598 19:13:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.598 19:13:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.598 19:13:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.598 19:13:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 20032516 kB' 'MemUsed: 12793340 kB' 'SwapCached: 0 kB' 'Active: 5787200 kB' 'Inactive: 3677028 kB' 'Active(anon): 5569536 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339544 kB' 'Mapped: 81704 kB' 'AnonPages: 127824 kB' 'Shmem: 5444852 kB' 'KernelStack: 7432 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96296 kB' 'Slab: 381824 kB' 'SReclaimable: 96296 kB' 'SUnreclaim: 285528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.598 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.598 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.598 19:13:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # 
continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.857 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.857 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # continue 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.858 19:13:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.858 19:13:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.858 19:13:05 -- setup/common.sh@33 -- # echo 0 00:04:07.858 19:13:05 -- setup/common.sh@33 -- # return 0 00:04:07.858 19:13:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.858 19:13:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.858 19:13:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.858 19:13:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.858 19:13:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.858 node0=1024 expecting 1024 00:04:07.858 19:13:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.858 00:04:07.858 real 0m2.595s 00:04:07.858 user 0m0.717s 00:04:07.858 sys 0m1.006s 00:04:07.858 19:13:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.858 19:13:05 -- common/autotest_common.sh@10 -- # set +x 00:04:07.858 ************************************ 00:04:07.858 END 
TEST default_setup 00:04:07.858 ************************************ 00:04:07.858 19:13:05 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:07.858 19:13:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.858 19:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.858 19:13:05 -- common/autotest_common.sh@10 -- # set +x 00:04:07.858 ************************************ 00:04:07.858 START TEST per_node_1G_alloc 00:04:07.858 ************************************ 00:04:07.858 19:13:05 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:07.858 19:13:05 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:07.858 19:13:05 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:07.858 19:13:05 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:07.858 19:13:05 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:07.858 19:13:05 -- setup/hugepages.sh@51 -- # shift 00:04:07.858 19:13:05 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:07.858 19:13:05 -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.858 19:13:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.858 19:13:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:07.858 19:13:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:07.858 19:13:05 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:07.858 19:13:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.858 19:13:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:07.858 19:13:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.858 19:13:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.858 19:13:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.858 19:13:05 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:07.858 19:13:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.858 19:13:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:07.858 19:13:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.858 19:13:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:07.858 19:13:05 -- setup/hugepages.sh@73 -- # return 0 00:04:07.858 19:13:05 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:07.858 19:13:05 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:07.858 19:13:05 -- setup/hugepages.sh@146 -- # setup output 00:04:07.858 19:13:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.858 19:13:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.240 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:09.240 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.240 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:09.240 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:09.240 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:09.240 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:09.240 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:09.240 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:09.240 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.240 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:09.240 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:09.240 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:09.240 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
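(Note: the per_node_1G_alloc test starting above converts its 1048576 kB request into default-size (2048 kB) hugepages, assigns that count to each of the requested nodes 0 and 1, and then re-runs scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1, which is what produces the vfio-pci driver messages in this part of the log. Roughly, and with illustrative variable names:

size_kb=1048576                                  # requested allocation, in kB
hugepagesize_kb=2048                             # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 512 default-size pages

declare -a nodes_test
for node in 0 1; do                              # HUGENODE=0,1
    nodes_test[node]=$nr_hugepages               # 512 pages requested on each node
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]} pages requested"
# The test then reconfigures hugepages via the SPDK setup script, e.g.:
#   NRHUGE=$nr_hugepages HUGENODE=0,1 scripts/setup.sh
End of note.)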
00:04:09.240 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:09.240 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:09.240 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:09.240 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.240 19:13:07 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:09.240 19:13:07 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:09.240 19:13:07 -- setup/hugepages.sh@89 -- # local node 00:04:09.240 19:13:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.240 19:13:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.240 19:13:07 -- setup/hugepages.sh@92 -- # local surp 00:04:09.240 19:13:07 -- setup/hugepages.sh@93 -- # local resv 00:04:09.240 19:13:07 -- setup/hugepages.sh@94 -- # local anon 00:04:09.240 19:13:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.240 19:13:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.240 19:13:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.240 19:13:07 -- setup/common.sh@18 -- # local node= 00:04:09.240 19:13:07 -- setup/common.sh@19 -- # local var val 00:04:09.240 19:13:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.240 19:13:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.240 19:13:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.240 19:13:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.240 19:13:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.240 19:13:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.240 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 19:13:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44109444 kB' 'MemAvailable: 47857620 kB' 'Buffers: 2708 kB' 'Cached: 11938856 kB' 'SwapCached: 0 kB' 'Active: 8640456 kB' 'Inactive: 3752604 kB' 'Active(anon): 8248324 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454744 kB' 'Mapped: 165888 kB' 'Shmem: 7796828 kB' 'KReclaimable: 194040 kB' 'Slab: 669428 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475388 kB' 'KernelStack: 12848 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9315536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198440 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:09.240 19:13:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.240 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.240 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.240 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.240 19:13:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.240 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.240 19:13:07 -- 
00:04:09.240 19:13:07 [setup/common.sh@31-32: key-by-key scan of the /proc/meminfo snapshot continues; MemAvailable through HardwareCorrupted checked against AnonHugePages, none match]
00:04:09.241 19:13:07 -- setup/common.sh@31 -- # read -r var val _
00:04:09.241 19:13:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:09.241 19:13:07 -- setup/common.sh@33 -- # echo 0
00:04:09.241 19:13:07 -- setup/common.sh@33 -- # return 0
00:04:09.241 19:13:07 -- setup/hugepages.sh@97 -- # anon=0
00:04:09.241 19:13:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:09.241 19:13:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.241 19:13:07 -- setup/common.sh@18 -- # local node=
00:04:09.241 19:13:07 -- setup/common.sh@19 -- # local var val
00:04:09.241 19:13:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.241 19:13:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.241 19:13:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.241 19:13:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.241 19:13:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.241 19:13:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.241 19:13:07 -- setup/common.sh@31 -- # IFS=': '
00:04:09.241 19:13:07 -- setup/common.sh@31 -- # read -r var val _
00:04:09.241 19:13:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44112404 kB' 'MemAvailable: 47860580 kB' 'Buffers: 2708 kB' 'Cached: 11938856 kB' 'SwapCached: 0 kB' 'Active: 8644092 kB' 'Inactive: 3752604 kB' 'Active(anon): 8251960 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457984 kB' 'Mapped: 165448 kB' 'Shmem: 7796828 kB' 'KReclaimable: 194040 kB' 'Slab: 669412 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475372 kB' 'KernelStack: 12816 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9318216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB'
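
The trace above is the body of get_meminfo: it snapshots the chosen meminfo file with mapfile, strips any leading "Node <N> " prefix, then feeds the lines through printf into a read loop that splits each line on IFS=': ' and keeps issuing continue until the requested key is found, at which point the value is echoed and the function returns. A minimal bash sketch of that pattern, reconstructed from the traced commands rather than copied from setup/common.sh (the real script's line numbers, node handling and error paths may differ), looks like this:

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (assumed reconstruction, not the
# verbatim setup/common.sh).
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N "

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    # Node meminfo lines are prefixed with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key by key, exactly as the IFS/read/continue lines in the trace show.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
        continue
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# The calls seen in this trace would then look like:
#   get_meminfo_sketch AnonHugePages      -> 0
#   get_meminfo_sketch HugePages_Surp     -> 0
#   get_meminfo_sketch HugePages_Surp 0   -> per-node value from node0/meminfo
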
00:04:09.241 19:13:07 [setup/common.sh@31-32: scan of the snapshot above against HugePages_Surp; MemTotal through HugePages_Rsvd checked and skipped]
00:04:09.242 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.242 19:13:07 -- setup/common.sh@33 -- # echo 0
00:04:09.242 19:13:07 -- setup/common.sh@33 -- # return 0
00:04:09.242 19:13:07 -- setup/hugepages.sh@99 -- # surp=0
00:04:09.242 19:13:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.242 19:13:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.242 19:13:07 -- setup/common.sh@18 -- # local node=
00:04:09.242 19:13:07 -- setup/common.sh@19 -- # local var val
00:04:09.242 19:13:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.242 19:13:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.242 19:13:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.242 19:13:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.242 19:13:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.242 19:13:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.242 19:13:07 -- setup/common.sh@31 -- # IFS=': '
00:04:09.242 19:13:07 -- setup/common.sh@31 -- # read -r var val _
00:04:09.243 19:13:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44108944 kB' 'MemAvailable: 47857120 kB' 'Buffers: 2708 kB' 'Cached: 11938872 kB' 'SwapCached: 0 kB' 'Active: 8644496 kB' 'Inactive: 3752604 kB' 'Active(anon): 8252364 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458776 kB' 'Mapped: 166288 kB' 'Shmem: 7796844 kB' 'KReclaimable: 194040 kB' 'Slab: 669388 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475348 kB' 'KernelStack: 12800 kB' 'PageTables: 7412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9319436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198364 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB'
00:04:09.243 19:13:07 [setup/common.sh@31-32: scan of the snapshot above against HugePages_Rsvd; MemTotal through HugePages_Free checked and skipped]
00:04:09.244 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.244 19:13:07 -- setup/common.sh@33 -- # echo 0
00:04:09.244 19:13:07 -- setup/common.sh@33 -- # return 0
00:04:09.244 19:13:07 -- setup/hugepages.sh@100 -- # resv=0
00:04:09.244 19:13:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:09.244 nr_hugepages=1024
00:04:09.244 19:13:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.244 resv_hugepages=0
00:04:09.244 19:13:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.244 surplus_hugepages=0
00:04:09.244 19:13:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.244 anon_hugepages=0
00:04:09.244 19:13:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.244 19:13:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:09.244 19:13:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.244 19:13:07 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.244 19:13:07 -- setup/common.sh@18 -- # local node=
00:04:09.244 19:13:07 -- setup/common.sh@19 -- # local var val
00:04:09.244 19:13:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.244 19:13:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.244 19:13:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.244 19:13:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.244 19:13:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.244 19:13:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.244 19:13:07 -- setup/common.sh@31 -- # IFS=': '
00:04:09.244 19:13:07 -- setup/common.sh@31 -- # read -r var val _
00:04:09.244 19:13:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44112148 kB' 'MemAvailable: 47860324 kB' 'Buffers: 2708 kB' 'Cached: 11938888 kB' 'SwapCached: 0 kB' 'Active: 8638576 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246444 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452836 kB' 'Mapped: 165372 kB' 'Shmem: 7796860 kB' 'KReclaimable: 194040 kB' 'Slab: 669464 kB' 'SReclaimable: 194040 kB' 'SUnreclaim: 475424 kB' 'KernelStack: 12816 kB' 'PageTables: 7488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9313328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB'
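
With anon, surp and resv collected (all 0 on this run) and nr_hugepages reported as 1024, hugepages.sh checks that the kernel-wide pool is consistent before it moves on to the per-node split. A hedged sketch of that accounting, reusing the get_meminfo_sketch helper from above and illustrative variable names rather than the script's own code, is:

# Consistency check sketched from the (( ... )) trace lines above;
# names and structure are illustrative, not the exact setup/hugepages.sh.
nr_hugepages=1024

anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this run
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1024

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The pool is healthy when the reported total equals the requested pages
# plus any surplus and reserved pages: 1024 == 1024 + 0 + 0 here.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
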
00:04:09.245 19:13:07 [setup/common.sh@31-32: scan of the snapshot above against HugePages_Total; MemTotal through Unaccepted checked and skipped]
00:04:09.245 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.245 19:13:07 -- setup/common.sh@33 -- # echo 1024
00:04:09.245 19:13:07 -- setup/common.sh@33 -- # return 0
00:04:09.245 19:13:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.245 19:13:07 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.245 19:13:07 -- setup/hugepages.sh@27 -- # local node
00:04:09.245 19:13:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.245 19:13:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.245 19:13:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.245 19:13:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.245 19:13:07 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:09.245 19:13:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.245 19:13:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.245 19:13:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.245 19:13:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.245 19:13:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.246 19:13:07 -- setup/common.sh@18 -- # local node=0
00:04:09.246 19:13:07 -- setup/common.sh@19 -- # local var val
00:04:09.246 19:13:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.246 19:13:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.246 19:13:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.246 19:13:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.246 19:13:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.246 19:13:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': '
00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _
00:04:09.246 19:13:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 21083216 kB' 'MemUsed: 11742640 kB' 'SwapCached: 0 kB' 'Active: 5786804 kB' 'Inactive: 3677028 kB' 'Active(anon): 5569140 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339568 kB' 'Mapped: 81708 kB' 'AnonPages: 127384 kB' 'Shmem: 5444876 kB' 'KernelStack: 7384 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96296 kB' 'Slab: 381792 kB' 'SReclaimable: 96296 kB' 'SUnreclaim: 285496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
-- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 
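Note: the scan above is setup/common.sh's get_meminfo helper walking every key of the node-0 meminfo file until it reaches the requested HugePages_Surp entry. A simplified, self-contained sketch of the same technique (not the exact script, which is what the xtrace lines are stepping through) would look like this:

get_meminfo() {                          # get_meminfo <key> [numa-node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    shopt -s extglob                     # for the +([0-9]) pattern below
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo HugePages_Surp 0             # prints 0 on this box, per the trace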
00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.246 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.246 19:13:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@33 -- # echo 0 00:04:09.247 19:13:07 -- setup/common.sh@33 -- # return 0 00:04:09.247 19:13:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.247 19:13:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.247 19:13:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.247 19:13:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:09.247 19:13:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.247 19:13:07 -- setup/common.sh@18 -- # local node=1 00:04:09.247 19:13:07 -- setup/common.sh@19 -- # local var val 00:04:09.247 19:13:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.247 19:13:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.247 19:13:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:09.247 19:13:07 
-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:09.247 19:13:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.247 19:13:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27709816 kB' 'MemFree: 23028932 kB' 'MemUsed: 4680884 kB' 'SwapCached: 0 kB' 'Active: 2852088 kB' 'Inactive: 75576 kB' 'Active(anon): 2677620 kB' 'Inactive(anon): 0 kB' 'Active(file): 174468 kB' 'Inactive(file): 75576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2602060 kB' 'Mapped: 83664 kB' 'AnonPages: 325776 kB' 'Shmem: 2352016 kB' 'KernelStack: 5464 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97744 kB' 'Slab: 287672 kB' 'SReclaimable: 97744 kB' 'SUnreclaim: 189928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
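Note: the node-1 dump above reports the same picture as node 0: 'HugePages_Total: 512', 'HugePages_Free: 512', 'HugePages_Surp: 0', i.e. the 1024 test pages are split evenly and none are surplus. The same per-node counters can also be cross-checked straight from sysfs (standard kernel paths; 2048kB matches the Hugepagesize reported in the trace):

for n in /sys/devices/system/node/node[0-9]*; do
    echo "node${n##*node}: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages") pages"
done
# expected on this box: node0: 512 pages, node1: 512 pages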
00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.247 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.247 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.247 19:13:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 
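Note: below, hugepages.sh folds the reserved count and each node's HugePages_Surp into nodes_test before printing the 'node0=512 expecting 512' / 'node1=512 expecting 512' verdicts. In this run both adjustments are zero, so the bookkeeping reduces to the following sketch (reusing the get_meminfo helper sketched above; variable names follow the script, values are taken from the trace):

nodes_test=(512 512)     # per-node share of the 1024 test pages
resv=0                   # no reserved pages in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # += 0 here
done
echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"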
00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # continue 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.248 19:13:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.248 19:13:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.248 19:13:07 -- setup/common.sh@33 -- # echo 0 00:04:09.248 19:13:07 -- setup/common.sh@33 -- # return 0 00:04:09.248 19:13:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.248 19:13:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.248 19:13:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.248 19:13:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.248 19:13:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:09.248 node0=512 expecting 512 00:04:09.248 19:13:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.248 19:13:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.248 19:13:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.248 19:13:07 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:09.248 node1=512 expecting 512 00:04:09.248 19:13:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:09.248 00:04:09.248 real 0m1.592s 00:04:09.248 user 0m0.685s 00:04:09.248 sys 0m0.875s 00:04:09.248 19:13:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.248 19:13:07 -- common/autotest_common.sh@10 -- # set +x 00:04:09.248 ************************************ 00:04:09.248 END TEST per_node_1G_alloc 00:04:09.248 ************************************ 00:04:09.509 19:13:07 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.509 19:13:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.509 19:13:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.509 19:13:07 -- common/autotest_common.sh@10 -- # set +x 00:04:09.509 ************************************ 00:04:09.509 START TEST even_2G_alloc 00:04:09.509 ************************************ 00:04:09.509 19:13:07 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:09.509 19:13:07 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:09.509 19:13:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.509 19:13:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.509 19:13:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.509 19:13:07 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.509 19:13:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.509 19:13:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.509 19:13:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.509 19:13:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.509 19:13:07 -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:04:09.509 19:13:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:09.509 19:13:07 -- setup/hugepages.sh@83 -- # : 512 00:04:09.509 19:13:07 -- setup/hugepages.sh@84 -- # : 1 00:04:09.509 19:13:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:09.509 19:13:07 -- setup/hugepages.sh@83 -- # : 0 00:04:09.509 19:13:07 -- setup/hugepages.sh@84 -- # : 0 00:04:09.509 19:13:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.509 19:13:07 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:09.509 19:13:07 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:09.509 19:13:07 -- setup/hugepages.sh@153 -- # setup output 00:04:09.509 19:13:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.509 19:13:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.448 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.448 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.448 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.448 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.448 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.448 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.448 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.448 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.448 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.448 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.448 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.448 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.448 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.448 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.448 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.448 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.448 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.709 19:13:08 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:10.709 19:13:08 -- setup/hugepages.sh@89 -- # local node 00:04:10.709 19:13:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.709 19:13:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.709 19:13:08 -- setup/hugepages.sh@92 -- # local surp 00:04:10.709 19:13:08 -- setup/hugepages.sh@93 -- # local resv 00:04:10.709 19:13:08 -- setup/hugepages.sh@94 -- # local anon 00:04:10.709 19:13:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.709 19:13:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.709 19:13:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.709 19:13:08 -- setup/common.sh@18 -- # local node= 00:04:10.709 19:13:08 -- setup/common.sh@19 -- # local var val 00:04:10.709 19:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.709 19:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.709 19:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.709 19:13:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.709 19:13:08 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:10.709 19:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44083516 kB' 'MemAvailable: 47831676 kB' 'Buffers: 2708 kB' 'Cached: 11938948 kB' 'SwapCached: 0 kB' 'Active: 8637692 kB' 'Inactive: 3752604 kB' 'Active(anon): 8245560 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451896 kB' 'Mapped: 164440 kB' 'Shmem: 7796920 kB' 'KReclaimable: 194008 kB' 'Slab: 669084 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475076 kB' 'KernelStack: 12800 kB' 'PageTables: 7264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9299956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198408 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 
19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.709 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.709 19:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.709 19:13:08 -- setup/common.sh@33 -- # echo 0 00:04:10.709 19:13:08 -- setup/common.sh@33 -- # return 0 00:04:10.709 19:13:08 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.709 19:13:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.709 19:13:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.709 19:13:08 -- setup/common.sh@18 -- # local node= 00:04:10.709 19:13:08 -- setup/common.sh@19 -- # local var val 00:04:10.709 19:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.709 19:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.710 19:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.710 19:13:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.710 19:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.710 19:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44086152 kB' 'MemAvailable: 47834312 kB' 'Buffers: 2708 kB' 'Cached: 11938952 kB' 'SwapCached: 0 kB' 'Active: 8637504 kB' 'Inactive: 3752604 kB' 'Active(anon): 8245372 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451640 kB' 'Mapped: 164404 kB' 'Shmem: 7796924 kB' 'KReclaimable: 194008 kB' 'Slab: 669068 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475060 kB' 'KernelStack: 12800 kB' 'PageTables: 7260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9299968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 
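Note: this second verify pass belongs to the even_2G_alloc test started above. get_test_nr_hugepages turned the 2097152 request into 1024 pages at the 2048 kB Hugepagesize reported in meminfo (2097152 / 2048 = 1024, consistent with 'Hugetlb: 2097152 kB'), split 512 per node, and setup.sh was rerun with NRHUGE=1024 HUGE_EVEN_ALLOC=yes. verify_nr_hugepages is now collecting AnonHugePages (anon=0 above), HugePages_Surp (the scan in progress here) and HugePages_Rsvd (read next in the log) so it can apply the same check traced earlier at hugepages.sh@110. Roughly, as a sketch (get_meminfo as above; per_node is only an illustrative name):

nr_hugepages=$(( 2097152 / 2048 ))     # 1024 pages, matching the trace
per_node=$(( nr_hugepages / 2 ))       # 512 for node0 and node1

anon=$(get_meminfo AnonHugePages)      # 0 in this run
surp=$(get_meminfo HugePages_Surp)     # 0, per the return a few lines below
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv ))   # the hugepages.sh@110 check seen earlier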
00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.710 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.710 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.711 19:13:08 -- setup/common.sh@33 -- # echo 0 00:04:10.711 19:13:08 -- setup/common.sh@33 -- # return 0 00:04:10.711 19:13:08 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.711 19:13:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.711 19:13:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.711 19:13:08 -- setup/common.sh@18 -- # local node= 00:04:10.711 19:13:08 -- setup/common.sh@19 -- # local var val 00:04:10.711 19:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.711 19:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.711 19:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.711 19:13:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.711 19:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.711 19:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44087812 kB' 'MemAvailable: 47835972 kB' 'Buffers: 2708 kB' 'Cached: 11938964 kB' 'SwapCached: 0 kB' 'Active: 8637416 kB' 'Inactive: 3752604 kB' 'Active(anon): 8245284 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451572 kB' 'Mapped: 164404 kB' 'Shmem: 7796936 kB' 'KReclaimable: 194008 kB' 'Slab: 669060 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475052 kB' 'KernelStack: 12800 kB' 'PageTables: 7212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9299984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198360 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 
kB' 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 
19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.711 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.711 19:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.712 19:13:08 -- setup/common.sh@33 -- # echo 0 
00:04:10.712 19:13:08 -- setup/common.sh@33 -- # return 0 00:04:10.712 19:13:08 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.712 19:13:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.712 nr_hugepages=1024 00:04:10.712 19:13:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.712 resv_hugepages=0 00:04:10.712 19:13:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.712 surplus_hugepages=0 00:04:10.712 19:13:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.712 anon_hugepages=0 00:04:10.712 19:13:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.712 19:13:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.712 19:13:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.712 19:13:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.712 19:13:08 -- setup/common.sh@18 -- # local node= 00:04:10.712 19:13:08 -- setup/common.sh@19 -- # local var val 00:04:10.712 19:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.712 19:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.712 19:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.712 19:13:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.712 19:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.712 19:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44087812 kB' 'MemAvailable: 47835972 kB' 'Buffers: 2708 kB' 'Cached: 11938976 kB' 'SwapCached: 0 kB' 'Active: 8637816 kB' 'Inactive: 3752604 kB' 'Active(anon): 8245684 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451892 kB' 'Mapped: 164404 kB' 'Shmem: 7796948 kB' 'KReclaimable: 194008 kB' 'Slab: 669060 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475052 kB' 'KernelStack: 12816 kB' 'PageTables: 7268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9299996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198344 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
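The trace in this stretch is setup/common.sh's get_meminfo scanning every key of a meminfo snapshot until it reaches the one it was asked for: HugePages_Surp and HugePages_Rsvd both come back 0 (surp=0, resv=0), and the HugePages_Total pass starting here will echo 1024. A minimal sketch of that scan pattern, using a hypothetical helper name (get_counter, not the script's own get_meminfo) and only the system-wide /proc/meminfo, not the per-node files the real function also handles:

    # Simplified scan: read "Name: value [kB]" records and print the value of
    # the requested counter, mirroring the traced read/continue loop.
    get_counter() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # the "echo 0" / "echo 1024" step in the trace
                return 0
            fi
        done < /proc/meminfo
        return 1                   # counter not present
    }

    surp=$(get_counter HugePages_Surp)    # 0 in this run
    resv=$(get_counter HugePages_Rsvd)    # 0 in this run

The real function slurps the file with mapfile first so the same loop can be fed either /proc/meminfo or a per-node meminfo file; the sketch keeps only the scan itself.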
00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 
00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.712 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.712 19:13:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 
19:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.713 19:13:08 -- setup/common.sh@33 -- # echo 1024 00:04:10.713 19:13:08 -- setup/common.sh@33 -- # return 0 00:04:10.713 19:13:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.713 19:13:08 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.713 19:13:08 -- setup/hugepages.sh@27 -- # local node 00:04:10.713 19:13:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.713 19:13:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.713 19:13:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.713 19:13:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.713 19:13:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.713 19:13:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.713 19:13:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.713 19:13:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.713 19:13:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.713 19:13:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.713 19:13:08 
-- setup/common.sh@18 -- # local node=0 00:04:10.713 19:13:08 -- setup/common.sh@19 -- # local var val 00:04:10.713 19:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.713 19:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.713 19:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.713 19:13:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.713 19:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.713 19:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 21087264 kB' 'MemUsed: 11738592 kB' 'SwapCached: 0 kB' 'Active: 5786480 kB' 'Inactive: 3677028 kB' 'Active(anon): 5568816 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339592 kB' 'Mapped: 80860 kB' 'AnonPages: 127048 kB' 'Shmem: 5444900 kB' 'KernelStack: 7336 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96296 kB' 'Slab: 381612 kB' 'SReclaimable: 96296 kB' 'SUnreclaim: 285316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.713 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.713 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:13:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 
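When get_meminfo is called with a node number, as in the HugePages_Surp 0 pass being traced here, the main thing that changes is the input file: mem_f flips from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the "Node 0 " prefix carried by those per-node lines is stripped before the same scan loop runs. A sketch of that source selection and prefix strip, assuming extglob is enabled as it is in the traced shell:

    shopt -s extglob                                  # required for the +([0-9]) pattern below
    node=0                                            # left empty in the system-wide case
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                         # slurp the snapshot
    mem=("${mem[@]#Node +([0-9]) }")                  # drop the leading "Node N " on per-node lines
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # e.g. "HugePages_Surp: 0"

With node left empty, the [[ -e ... ]] test points at the nonexistent node/node/meminfo path, which is why the earlier system-wide passes in this trace fall back to /proc/meminfo.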
00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@33 -- # echo 0 00:04:10.972 19:13:08 -- setup/common.sh@33 -- # return 0 00:04:10.972 19:13:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.972 19:13:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.972 19:13:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.972 19:13:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.972 19:13:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.972 19:13:08 -- setup/common.sh@18 -- # local node=1 00:04:10.972 19:13:08 -- setup/common.sh@19 -- # local var val 00:04:10.972 19:13:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.972 19:13:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.972 19:13:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.972 19:13:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.972 19:13:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.972 19:13:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27709816 kB' 'MemFree: 23001292 kB' 'MemUsed: 4708524 kB' 'SwapCached: 0 kB' 'Active: 2850692 kB' 'Inactive: 75576 kB' 'Active(anon): 2676224 kB' 'Inactive(anon): 0 kB' 'Active(file): 174468 kB' 'Inactive(file): 75576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2602096 kB' 'Mapped: 83544 kB' 'AnonPages: 324196 kB' 'Shmem: 2352052 kB' 'KernelStack: 5448 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97712 kB' 'Slab: 287448 kB' 'SReclaimable: 97712 kB' 'SUnreclaim: 189736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 
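The node indices used throughout this check come from get_nodes (hugepages.sh@27-33 earlier in the trace), which globs /sys/devices/system/node/node* and keys the expectation array by the numeric suffix, giving nodes_sys[0]=512 and nodes_sys[1]=512 on this two-node box. A rough equivalent of that indexing, again assuming extglob (nullglob is the sketch's own addition for safety):

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512       # ${node##*node} strips up to the last "node" -> 0, 1, ...
    done
    no_nodes=${#nodes_sys[@]}               # 2 on this machine
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2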
00:04:10.973 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:08 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- 
setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # continue 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:13:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:13:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:13:09 -- setup/common.sh@33 -- # echo 0 00:04:10.973 19:13:09 -- setup/common.sh@33 -- # return 0 00:04:10.973 19:13:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.973 19:13:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.973 19:13:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.973 19:13:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.973 node0=512 expecting 512 00:04:10.973 19:13:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.973 19:13:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
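What follows is the pass/fail bookkeeping for even_2G_alloc: reserved and surplus pages (both 0 here) are folded into nodes_test, each observed and expected per-node count is dropped into the sorted_t / sorted_s associative arrays (presumably so the two can be cross-checked as sets after this excerpt), and the "node0=512 expecting 512" / "node1=512 expecting 512" lines plus the final [[ 512 == 512 ]] make the comparison visible. A compressed sketch of the idea with this run's values hard-coded:

    declare -A sorted_t=() sorted_s=()
    nodes_test=(512 512)            # per-node counts reported by the kernel in this run
    nodes_sys=(512 512)             # per-node counts the test asked setup.sh to allocate

    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1                # set of observed counts
        sorted_s[${nodes_sys[node]}]=1                 # set of expected counts
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    [[ ${nodes_test[0]} == "${nodes_sys[0]}" ]] && echo 'even allocation verified'

The same bookkeeping is reused right after this by odd_alloc, which asks for 1025 pages and splits them 513/512 across the two nodes, so the per-node expectations stop being symmetric there.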
00:04:10.973 19:13:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.973 19:13:09 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:10.973 node1=512 expecting 512 00:04:10.973 19:13:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:10.973 00:04:10.973 real 0m1.495s 00:04:10.973 user 0m0.599s 00:04:10.973 sys 0m0.860s 00:04:10.973 19:13:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:10.973 19:13:09 -- common/autotest_common.sh@10 -- # set +x 00:04:10.973 ************************************ 00:04:10.973 END TEST even_2G_alloc 00:04:10.973 ************************************ 00:04:10.973 19:13:09 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:10.973 19:13:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.973 19:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.973 19:13:09 -- common/autotest_common.sh@10 -- # set +x 00:04:10.973 ************************************ 00:04:10.973 START TEST odd_alloc 00:04:10.973 ************************************ 00:04:10.973 19:13:09 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:10.973 19:13:09 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:10.973 19:13:09 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:10.973 19:13:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:10.973 19:13:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.973 19:13:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.973 19:13:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.973 19:13:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:10.973 19:13:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.973 19:13:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.973 19:13:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.973 19:13:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.973 19:13:09 -- setup/hugepages.sh@83 -- # : 513 00:04:10.973 19:13:09 -- setup/hugepages.sh@84 -- # : 1 00:04:10.973 19:13:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:10.973 19:13:09 -- setup/hugepages.sh@83 -- # : 0 00:04:10.973 19:13:09 -- setup/hugepages.sh@84 -- # : 0 00:04:10.973 19:13:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.973 19:13:09 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:10.973 19:13:09 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:10.973 19:13:09 -- setup/hugepages.sh@160 -- # setup output 00:04:10.973 19:13:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.973 19:13:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.355 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.355 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.355 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.355 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.355 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.355 0000:00:04.3 (8086 
0e23): Already using the vfio-pci driver 00:04:12.355 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.355 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.355 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.355 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:12.355 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:12.355 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:12.355 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:12.355 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:12.355 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:12.355 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:12.355 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:12.355 19:13:10 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:12.355 19:13:10 -- setup/hugepages.sh@89 -- # local node 00:04:12.355 19:13:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.355 19:13:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.355 19:13:10 -- setup/hugepages.sh@92 -- # local surp 00:04:12.355 19:13:10 -- setup/hugepages.sh@93 -- # local resv 00:04:12.355 19:13:10 -- setup/hugepages.sh@94 -- # local anon 00:04:12.355 19:13:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.355 19:13:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.355 19:13:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.355 19:13:10 -- setup/common.sh@18 -- # local node= 00:04:12.355 19:13:10 -- setup/common.sh@19 -- # local var val 00:04:12.355 19:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.355 19:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.355 19:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.355 19:13:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.355 19:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.355 19:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.355 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.355 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44086456 kB' 'MemAvailable: 47834620 kB' 'Buffers: 2708 kB' 'Cached: 11939040 kB' 'SwapCached: 0 kB' 'Active: 8638312 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246180 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452464 kB' 'Mapped: 164404 kB' 'Shmem: 7797012 kB' 'KReclaimable: 194016 kB' 'Slab: 669040 kB' 'SReclaimable: 194016 kB' 'SUnreclaim: 475024 kB' 'KernelStack: 12864 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37606840 kB' 'Committed_AS: 9300048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 
'DirectMap1G: 58720256 kB' 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- 
setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.356 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.356 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.357 19:13:10 -- setup/common.sh@33 -- # echo 0 00:04:12.357 19:13:10 -- setup/common.sh@33 -- # return 0 00:04:12.357 19:13:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:12.357 19:13:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.357 19:13:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.357 19:13:10 -- setup/common.sh@18 -- # local node= 00:04:12.357 19:13:10 -- setup/common.sh@19 -- # local var val 00:04:12.357 19:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.357 19:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.357 19:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.357 19:13:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.357 19:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.357 19:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44088772 kB' 'MemAvailable: 47836932 kB' 'Buffers: 2708 kB' 'Cached: 11939044 kB' 'SwapCached: 0 kB' 'Active: 8638132 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452264 kB' 'Mapped: 164476 kB' 'Shmem: 7797016 kB' 'KReclaimable: 194008 kB' 'Slab: 669028 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475020 kB' 'KernelStack: 12848 kB' 'PageTables: 7300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37606840 kB' 'Committed_AS: 9300060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198344 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 
19:13:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
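Each of these long runs of "[[ key == \H\u\g\e... ]] / continue" entries is setup/common.sh's get_meminfo scanning one meminfo snapshot a field at a time until it reaches the requested key (here HugePages_Surp). A compact sketch of that parser, reconstructed from the calls visible in the trace (mem_f selection, mapfile, "Node N " prefix strip, IFS=': ' read loop); the fallback value for an unknown key is an assumption:

#!/usr/bin/env bash
# Sketch of setup/common.sh's get_meminfo as it appears in the trace:
# choose /proc/meminfo or a node's sysfs file, strip the "Node N " prefix,
# then read "key: value" pairs until the requested key matches.
shopt -s extglob   # needed for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f=/proc/meminfo
    local -a mem

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0   # assumption: a key that never matches reports 0
}

get_meminfo HugePages_Surp      # system-wide lookup
get_meminfo HugePages_Total 0   # node 0 only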
00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.357 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.357 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.358 19:13:10 -- setup/common.sh@33 -- # echo 0 00:04:12.358 19:13:10 -- setup/common.sh@33 -- # return 0 00:04:12.358 19:13:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:12.358 19:13:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.358 19:13:10 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:12.358 19:13:10 -- setup/common.sh@18 -- # local node= 00:04:12.358 19:13:10 -- setup/common.sh@19 -- # local var val 00:04:12.358 19:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.358 19:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.358 19:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.358 19:13:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.358 19:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.358 19:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44089080 kB' 'MemAvailable: 47837240 kB' 'Buffers: 2708 kB' 'Cached: 11939048 kB' 'SwapCached: 0 kB' 'Active: 8637708 kB' 'Inactive: 3752604 kB' 'Active(anon): 8245576 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451784 kB' 'Mapped: 164404 kB' 'Shmem: 7797020 kB' 'KReclaimable: 194008 kB' 'Slab: 668988 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474980 kB' 'KernelStack: 12848 kB' 'PageTables: 7256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37606840 kB' 'Committed_AS: 9300076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- 
setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.358 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.358 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 
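The key being searched for here is HugePages_Rsvd, the third of the four values the odd_alloc verification reads (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total). The 1025-page total it is validating comes from the split configured at the start of the test; a short sketch that reproduces the arithmetic the earlier hugepages.sh@81-84 entries record, with split_hugepages as an illustrative name:

#!/usr/bin/env bash
# Reproduces the per-node split the odd_alloc trace shows: 1025 pages over
# 2 nodes via integer division, so the remainder lands on node 0 (513 + 512).

split_hugepages() {   # usage: split_hugepages <total_pages> <node_count>
    local total=$1 no_nodes=$2
    local -a per_node
    while (( no_nodes > 0 )); do
        per_node[no_nodes - 1]=$(( total / no_nodes ))
        : $(( total -= per_node[no_nodes - 1] ))   # ':' keeps a zero result from tripping errexit
        : $(( no_nodes-- ))
    done
    echo "${per_node[@]}"
}

split_hugepages 1025 2   # prints: 513 512

Further down in the trace, hugepages.sh checks that the totals reconcile, (( 1025 == nr_hugepages + surp + resv )), before re-reading the counts node by node.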
00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.359 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.359 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.359 19:13:10 -- setup/common.sh@33 -- # echo 0 00:04:12.359 19:13:10 -- setup/common.sh@33 -- # return 0 00:04:12.359 19:13:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.359 19:13:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:12.359 nr_hugepages=1025 00:04:12.359 19:13:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.359 resv_hugepages=0 00:04:12.359 19:13:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.359 surplus_hugepages=0 00:04:12.359 19:13:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.359 anon_hugepages=0 00:04:12.359 19:13:10 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.359 19:13:10 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:12.360 19:13:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.360 19:13:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.360 19:13:10 -- setup/common.sh@18 -- # local node= 00:04:12.360 19:13:10 -- setup/common.sh@19 -- # local var val 00:04:12.360 19:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.360 19:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.360 19:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.360 19:13:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.360 19:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.360 19:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44089080 kB' 'MemAvailable: 47837240 kB' 'Buffers: 2708 kB' 'Cached: 11939068 kB' 'SwapCached: 0 kB' 'Active: 8638008 kB' 'Inactive: 3752604 kB' 'Active(anon): 8245876 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452064 kB' 'Mapped: 164404 kB' 'Shmem: 7797040 kB' 'KReclaimable: 194008 kB' 'Slab: 668988 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474980 kB' 'KernelStack: 12848 kB' 'PageTables: 7256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37606840 kB' 'Committed_AS: 9300092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.360 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.360 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 
19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.361 19:13:10 -- setup/common.sh@33 -- # echo 1025 00:04:12.361 19:13:10 -- setup/common.sh@33 -- # return 0 00:04:12.361 19:13:10 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.361 19:13:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.361 19:13:10 -- setup/hugepages.sh@27 -- # local node 00:04:12.361 19:13:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.361 19:13:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.361 19:13:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.361 19:13:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:12.361 19:13:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.361 19:13:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.361 19:13:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.361 19:13:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.361 19:13:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.361 19:13:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.361 19:13:10 -- setup/common.sh@18 -- # local node=0 00:04:12.361 19:13:10 -- setup/common.sh@19 -- # local var val 00:04:12.361 19:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.361 19:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.361 19:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.361 19:13:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.361 19:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.361 19:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 21098040 kB' 'MemUsed: 11727816 kB' 'SwapCached: 0 kB' 'Active: 5783272 kB' 'Inactive: 3677028 kB' 'Active(anon): 5565608 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339636 kB' 'Mapped: 80860 kB' 'AnonPages: 123760 kB' 'Shmem: 5444944 kB' 'KernelStack: 7352 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96304 kB' 'Slab: 381592 kB' 'SReclaimable: 96304 kB' 'SUnreclaim: 285288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.361 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.361 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 
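The loop traced above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo one field at a time until it reaches HugePages_Surp. A minimal stand-alone sketch of the same lookup follows; get_node_meminfo is a hypothetical name used only for this illustration and is not a helper from the repository.

# Sketch only (not the SPDK helper itself): fetch one field from a node-local
# meminfo file, falling back to /proc/meminfo, mirroring what the traced loop
# computes with its IFS/read scan.
get_node_meminfo() {
    local field=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node${node}/meminfo ]] && \
        mem_f=/sys/devices/system/node/node${node}/meminfo
    # Node files prefix every line with "Node <N> "; strip it, then match the field.
    sed "s/^Node ${node} //" "$mem_f" | awk -v f="${field}:" '$1 == f { print $2 }'
}
get_node_meminfo HugePages_Surp 0   # the trace above ends this lookup with "echo 0"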
00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@33 -- # echo 0 00:04:12.362 19:13:10 -- setup/common.sh@33 -- # return 0 00:04:12.362 19:13:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.362 19:13:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.362 19:13:10 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:12.362 19:13:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:12.362 19:13:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.362 19:13:10 -- setup/common.sh@18 -- # local node=1 00:04:12.362 19:13:10 -- setup/common.sh@19 -- # local var val 00:04:12.362 19:13:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.362 19:13:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.362 19:13:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.362 19:13:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.362 19:13:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.362 19:13:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27709816 kB' 'MemFree: 22990548 kB' 'MemUsed: 4719268 kB' 'SwapCached: 0 kB' 'Active: 2854256 kB' 'Inactive: 75576 kB' 'Active(anon): 2679788 kB' 'Inactive(anon): 0 kB' 'Active(file): 174468 kB' 'Inactive(file): 75576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2602160 kB' 'Mapped: 83544 kB' 'AnonPages: 327764 kB' 'Shmem: 2352116 kB' 'KernelStack: 5464 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97704 kB' 'Slab: 287388 kB' 'SReclaimable: 97704 kB' 'SUnreclaim: 189684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.362 19:13:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.362 
19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.362 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 
19:13:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # continue 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.363 19:13:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.363 19:13:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.363 19:13:10 -- setup/common.sh@33 -- # echo 0 00:04:12.363 19:13:10 -- setup/common.sh@33 -- # return 0 00:04:12.363 19:13:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.363 19:13:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.363 19:13:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.363 19:13:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.363 19:13:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:12.363 node0=512 expecting 513 00:04:12.363 19:13:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.363 19:13:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.363 19:13:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.363 19:13:10 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:12.363 node1=513 expecting 512 00:04:12.363 19:13:10 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:12.363 00:04:12.363 real 0m1.537s 00:04:12.363 user 0m0.640s 00:04:12.363 sys 0m0.863s 00:04:12.363 19:13:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.363 19:13:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.363 ************************************ 00:04:12.363 END TEST odd_alloc 00:04:12.363 ************************************ 00:04:12.363 19:13:10 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:12.363 19:13:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.363 19:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.363 19:13:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.363 ************************************ 00:04:12.363 START TEST custom_alloc 00:04:12.363 ************************************ 00:04:12.363 19:13:10 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:12.363 19:13:10 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:12.363 19:13:10 -- setup/hugepages.sh@169 -- # local node 00:04:12.363 19:13:10 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:12.363 19:13:10 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:12.363 19:13:10 -- setup/hugepages.sh@172 -- 
# local nr_hugepages=0 _nr_hugepages=0 00:04:12.363 19:13:10 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:12.363 19:13:10 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:12.363 19:13:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.363 19:13:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.363 19:13:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:12.363 19:13:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.363 19:13:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.363 19:13:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.363 19:13:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.363 19:13:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.364 19:13:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.364 19:13:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.364 19:13:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:12.364 19:13:10 -- setup/hugepages.sh@83 -- # : 256 00:04:12.364 19:13:10 -- setup/hugepages.sh@84 -- # : 1 00:04:12.364 19:13:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:12.364 19:13:10 -- setup/hugepages.sh@83 -- # : 0 00:04:12.364 19:13:10 -- setup/hugepages.sh@84 -- # : 0 00:04:12.364 19:13:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:12.364 19:13:10 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:12.364 19:13:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:12.364 19:13:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:12.364 19:13:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.364 19:13:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.364 19:13:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.364 19:13:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:12.364 19:13:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.364 19:13:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.364 19:13:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.364 19:13:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.364 19:13:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:12.364 19:13:10 -- setup/hugepages.sh@78 -- # return 0 00:04:12.364 19:13:10 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:12.364 19:13:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:12.364 19:13:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:12.364 19:13:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:12.364 19:13:10 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:12.364 
19:13:10 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:12.364 19:13:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.364 19:13:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.364 19:13:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:12.364 19:13:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.364 19:13:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.364 19:13:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.364 19:13:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:12.364 19:13:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.364 19:13:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:12.364 19:13:10 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.364 19:13:10 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:12.364 19:13:10 -- setup/hugepages.sh@78 -- # return 0 00:04:12.364 19:13:10 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:12.622 19:13:10 -- setup/hugepages.sh@187 -- # setup output 00:04:12.622 19:13:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.622 19:13:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.561 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:13.561 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:13.561 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:13.561 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:13.561 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:13.561 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:13.561 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:13.561 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:13.561 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:13.561 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:13.561 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:13.561 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:13.561 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:13.561 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:13.561 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:13.561 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:13.561 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:13.826 19:13:11 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:13.826 19:13:11 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:13.826 19:13:11 -- setup/hugepages.sh@89 -- # local node 00:04:13.826 19:13:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.826 19:13:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.826 19:13:11 -- setup/hugepages.sh@92 -- # local surp 00:04:13.826 19:13:11 -- setup/hugepages.sh@93 -- # local resv 00:04:13.826 19:13:11 -- setup/hugepages.sh@94 -- # local anon 00:04:13.826 19:13:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.826 19:13:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.826 19:13:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.826 19:13:11 -- setup/common.sh@18 -- # local node= 00:04:13.826 
19:13:11 -- setup/common.sh@19 -- # local var val 00:04:13.826 19:13:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.826 19:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.826 19:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.826 19:13:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.826 19:13:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.826 19:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 43014624 kB' 'MemAvailable: 46762784 kB' 'Buffers: 2708 kB' 'Cached: 11939144 kB' 'SwapCached: 0 kB' 'Active: 8644120 kB' 'Inactive: 3752604 kB' 'Active(anon): 8251988 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458064 kB' 'Mapped: 165196 kB' 'Shmem: 7797116 kB' 'KReclaimable: 194008 kB' 'Slab: 668940 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474932 kB' 'KernelStack: 12848 kB' 'PageTables: 7244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37083576 kB' 'Committed_AS: 9306404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198476 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 
19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 
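The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test traced shortly before this lookup gates the anonymous-hugepage count on the bracketed value of /sys/kernel/mm/transparent_hugepage/enabled. Roughly equivalent shell, offered as a hedged sketch rather than the verify_nr_hugepages code itself:

# Only count AnonHugePages when transparent hugepages are not set to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon} kB"   # the trace above settles on anon=0 for this run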
00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.826 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.826 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.827 19:13:11 -- setup/common.sh@33 -- # echo 0 00:04:13.827 19:13:11 -- setup/common.sh@33 -- # return 0 00:04:13.827 19:13:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.827 19:13:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.827 19:13:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.827 19:13:11 -- setup/common.sh@18 -- # local node= 00:04:13.827 19:13:11 -- setup/common.sh@19 -- # local var val 00:04:13.827 19:13:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.827 19:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.827 19:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.827 19:13:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.827 19:13:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.827 19:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 
19:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 43018644 kB' 'MemAvailable: 46766804 kB' 'Buffers: 2708 kB' 'Cached: 11939148 kB' 'SwapCached: 0 kB' 'Active: 8643380 kB' 'Inactive: 3752604 kB' 'Active(anon): 8251248 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457316 kB' 'Mapped: 164836 kB' 'Shmem: 7797120 kB' 'KReclaimable: 194008 kB' 'Slab: 668896 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474888 kB' 'KernelStack: 12800 kB' 'PageTables: 7096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37083576 kB' 'Committed_AS: 9305480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198440 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 
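The report above lists HugePages_Total: 1536 with Hugepagesize: 2048 kB, which matches the custom_alloc request of nodes_hp[0]=512 plus nodes_hp[1]=1024. A quick manual cross-check along the same lines (illustrative only; the node[01] paths assume a two-node system with 2 MiB pages, as in this run):

total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
echo "HugePages_Total=${total}"   # 1536 in the report above
# Per-node breakdown; expected per HUGENODE: 512 (node0) and 1024 (node1).
cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages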
00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.827 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.827 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 
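The backslash-escaped key names that repeat through this trace (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and so on) are not corruption: bash xtrace prints a quoted, literal right-hand side of == inside [[ ]] with every character escaped, to make clear it is not being used as a glob pattern. A minimal reproduction, assuming nothing beyond stock bash:

set -x
key=HugePages_Surp
[[ MemTotal == "$key" ]]   # xtrace renders this as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (prefixed by whatever PS4 is set to)
set +x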
00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.828 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.828 19:13:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.828 19:13:11 -- setup/common.sh@33 -- # echo 0 00:04:13.828 19:13:11 -- setup/common.sh@33 -- # return 0 00:04:13.828 19:13:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.828 19:13:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.828 19:13:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.829 19:13:11 -- setup/common.sh@18 -- # local node= 00:04:13.829 19:13:11 -- setup/common.sh@19 -- # local var val 00:04:13.829 19:13:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.829 19:13:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.829 19:13:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.829 19:13:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.829 19:13:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.829 19:13:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 43017636 kB' 'MemAvailable: 46765796 kB' 'Buffers: 2708 kB' 'Cached: 11939160 kB' 'SwapCached: 0 kB' 'Active: 8641140 kB' 'Inactive: 3752604 kB' 'Active(anon): 8249008 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455048 kB' 'Mapped: 164848 kB' 'Shmem: 7797132 kB' 'KReclaimable: 194008 kB' 'Slab: 668896 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474888 kB' 'KernelStack: 12816 kB' 'PageTables: 7092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37083576 kB' 'Committed_AS: 9304044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198440 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.829 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.829 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.830 
19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.830 19:13:12 -- setup/common.sh@33 -- # echo 0 00:04:13.830 19:13:12 -- setup/common.sh@33 -- # return 0 00:04:13.830 19:13:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.830 19:13:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:13.830 nr_hugepages=1536 00:04:13.830 19:13:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.830 resv_hugepages=0 00:04:13.830 19:13:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.830 surplus_hugepages=0 00:04:13.830 19:13:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.830 anon_hugepages=0 00:04:13.830 19:13:12 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.830 19:13:12 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:13.830 19:13:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.830 19:13:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.830 19:13:12 -- setup/common.sh@18 -- # local node= 00:04:13.830 19:13:12 -- setup/common.sh@19 -- # local var val 00:04:13.830 19:13:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.830 19:13:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.830 19:13:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.830 19:13:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.830 19:13:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.830 19:13:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 43015632 kB' 'MemAvailable: 46763792 kB' 'Buffers: 2708 kB' 'Cached: 11939172 kB' 'SwapCached: 0 kB' 'Active: 8643712 kB' 'Inactive: 3752604 kB' 'Active(anon): 8251580 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457676 kB' 'Mapped: 165240 kB' 'Shmem: 7797144 kB' 'KReclaimable: 194008 kB' 'Slab: 668928 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474920 kB' 'KernelStack: 12816 kB' 'PageTables: 7132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37083576 kB' 'Committed_AS: 9306448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.830 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.830 19:13:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # 
continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.831 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.831 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.832 19:13:12 -- setup/common.sh@33 -- # echo 1536 00:04:13.832 19:13:12 -- setup/common.sh@33 -- # return 0 00:04:13.832 19:13:12 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:13.832 19:13:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.832 19:13:12 -- setup/hugepages.sh@27 -- # local node 00:04:13.832 19:13:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.832 19:13:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.832 19:13:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.832 19:13:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.832 19:13:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.832 19:13:12 -- 
setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.832 19:13:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.832 19:13:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.832 19:13:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.832 19:13:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.832 19:13:12 -- setup/common.sh@18 -- # local node=0 00:04:13.832 19:13:12 -- setup/common.sh@19 -- # local var val 00:04:13.832 19:13:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.832 19:13:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.832 19:13:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.832 19:13:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.832 19:13:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.832 19:13:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 21097876 kB' 'MemUsed: 11727980 kB' 'SwapCached: 0 kB' 'Active: 5783512 kB' 'Inactive: 3677028 kB' 'Active(anon): 5565848 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339648 kB' 'Mapped: 81124 kB' 'AnonPages: 124012 kB' 'Shmem: 5444956 kB' 'KernelStack: 7400 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96304 kB' 'Slab: 381540 kB' 'SReclaimable: 96304 kB' 'SUnreclaim: 285236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
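From the node 0 snapshot above onward, the same scan runs against /sys/devices/system/node/node0/meminfo rather than /proc/meminfo; those per-node files prefix each line with "Node 0 ", which the trace strips with "${mem[@]#Node +([0-9]) }" before matching keys. A hedged sketch of that per-node variant, with the key hard-coded purely for illustration:

# Hypothetical per-node variant of the scan; +([0-9]) needs extglob,
# which the traced scripts enable.
shopt -s extglob
node=0
mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == HugePages_Surp ]]; then
        echo "${val:-0}"
        break
    fi
done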
00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.832 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.832 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@33 -- # echo 0 00:04:13.833 19:13:12 -- setup/common.sh@33 -- # return 0 00:04:13.833 19:13:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.833 19:13:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.833 19:13:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.833 19:13:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.833 19:13:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.833 19:13:12 -- setup/common.sh@18 -- # local node=1 00:04:13.833 19:13:12 -- setup/common.sh@19 -- # local var val 00:04:13.833 19:13:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.833 19:13:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.833 19:13:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.833 19:13:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.833 19:13:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.833 19:13:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27709816 kB' 'MemFree: 21917756 kB' 'MemUsed: 5792060 kB' 'SwapCached: 0 kB' 'Active: 2854832 kB' 'Inactive: 75576 kB' 'Active(anon): 2680364 kB' 'Inactive(anon): 0 kB' 'Active(file): 174468 kB' 'Inactive(file): 75576 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2602248 kB' 'Mapped: 83696 kB' 'AnonPages: 328276 kB' 'Shmem: 2352204 kB' 'KernelStack: 5416 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97704 kB' 'Slab: 287388 kB' 'SReclaimable: 97704 kB' 'SUnreclaim: 189684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 
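The loop traced above is setup/common.sh's get_meminfo walking a per-node meminfo file field by field until it reaches the requested key (HugePages_Surp for node 1 here, per the get_meminfo call at hugepages.sh@117). A minimal standalone sketch of that lookup, assuming bash with extglob and the standard /proc and /sys meminfo layout; the function name and structure are illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
# Illustrative re-creation of the lookup being traced: pick /proc/meminfo or the
# per-node file, strip the "Node N " prefix the per-node files carry, then split
# each line on ": " and return the value of the requested key.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node +([0-9]) }              # no-op for /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Example: the surplus-hugepage count on node 1, as queried in the trace above.
get_meminfo_sketch HugePages_Surp 1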
00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.833 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.833 19:13:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # continue 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.834 19:13:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.834 19:13:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.834 19:13:12 -- setup/common.sh@33 -- # echo 0 00:04:13.834 19:13:12 -- setup/common.sh@33 -- # return 0 00:04:13.834 19:13:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.834 19:13:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.834 19:13:12 -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:04:13.834 19:13:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.834 19:13:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.834 node0=512 expecting 512 00:04:13.834 19:13:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.834 19:13:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.834 19:13:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.834 19:13:12 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:13.834 node1=1024 expecting 1024 00:04:13.834 19:13:12 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:13.834 00:04:13.834 real 0m1.473s 00:04:13.834 user 0m0.575s 00:04:13.834 sys 0m0.862s 00:04:13.834 19:13:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.834 19:13:12 -- common/autotest_common.sh@10 -- # set +x 00:04:13.834 ************************************ 00:04:13.834 END TEST custom_alloc 00:04:13.834 ************************************ 00:04:14.093 19:13:12 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:14.093 19:13:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.093 19:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.093 19:13:12 -- common/autotest_common.sh@10 -- # set +x 00:04:14.093 ************************************ 00:04:14.093 START TEST no_shrink_alloc 00:04:14.093 ************************************ 00:04:14.093 19:13:12 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:14.093 19:13:12 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:14.093 19:13:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.093 19:13:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.093 19:13:12 -- setup/hugepages.sh@51 -- # shift 00:04:14.093 19:13:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.093 19:13:12 -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.093 19:13:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.093 19:13:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.093 19:13:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.093 19:13:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.093 19:13:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.093 19:13:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.093 19:13:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.093 19:13:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.093 19:13:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.093 19:13:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.093 19:13:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.093 19:13:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.093 19:13:12 -- setup/hugepages.sh@73 -- # return 0 00:04:14.093 19:13:12 -- setup/hugepages.sh@198 -- # setup output 00:04:14.093 19:13:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.093 19:13:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.475 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:15.475 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.475 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:15.475 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:15.475 0000:00:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:04:15.475 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:15.475 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:15.475 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:15.475 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:15.475 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:15.475 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:15.475 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:15.475 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:15.475 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:15.475 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:15.475 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:15.475 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:15.475 19:13:13 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:15.475 19:13:13 -- setup/hugepages.sh@89 -- # local node 00:04:15.475 19:13:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.475 19:13:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.475 19:13:13 -- setup/hugepages.sh@92 -- # local surp 00:04:15.475 19:13:13 -- setup/hugepages.sh@93 -- # local resv 00:04:15.475 19:13:13 -- setup/hugepages.sh@94 -- # local anon 00:04:15.475 19:13:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.475 19:13:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.475 19:13:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.475 19:13:13 -- setup/common.sh@18 -- # local node= 00:04:15.475 19:13:13 -- setup/common.sh@19 -- # local var val 00:04:15.475 19:13:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.475 19:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.475 19:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.475 19:13:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.475 19:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.475 19:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.475 19:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44039236 kB' 'MemAvailable: 47787396 kB' 'Buffers: 2708 kB' 'Cached: 11939228 kB' 'SwapCached: 0 kB' 'Active: 8639184 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247052 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453028 kB' 'Mapped: 164448 kB' 'Shmem: 7797200 kB' 'KReclaimable: 194008 kB' 'Slab: 669032 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475024 kB' 'KernelStack: 12928 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9302432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198504 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.475 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.475 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- 
setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.476 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.476 19:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.477 19:13:13 -- setup/common.sh@33 -- # echo 0 00:04:15.477 19:13:13 -- setup/common.sh@33 -- # return 0 00:04:15.477 19:13:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:15.477 19:13:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.477 19:13:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.477 19:13:13 -- setup/common.sh@18 -- # local node= 00:04:15.477 19:13:13 -- setup/common.sh@19 -- # local var val 00:04:15.477 19:13:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.477 19:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.477 19:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.477 19:13:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.477 19:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.477 19:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44040268 kB' 'MemAvailable: 47788428 kB' 'Buffers: 2708 kB' 'Cached: 11939228 kB' 'SwapCached: 0 kB' 'Active: 8639948 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247816 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454316 kB' 'Mapped: 164396 kB' 'Shmem: 7797200 kB' 'KReclaimable: 194008 kB' 'Slab: 669040 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475032 kB' 'KernelStack: 12912 kB' 'PageTables: 7400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9303956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198488 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 
-- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 
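Earlier in this verify_nr_hugepages pass (hugepages.sh@96-97) the trace tested the contents of /sys/kernel/mm/transparent_hugepage/enabled before reading AnonHugePages: the bracketed entry in that sysfs knob marks the active THP policy, and anonymous huge pages are only counted when the policy is not "never". A hedged standalone sketch of the same guard, using the standard kernel sysfs and /proc files (variable names are illustrative):

# Count anonymous (transparent) huge pages only when THP is not disabled system-wide.
# The sysfs knob reads like "always [madvise] never"; the bracketed word is the
# active policy, so matching the literal string "[never]" detects the disabled case.
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon_kb=0
if [[ $thp_enabled != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon_kb} kB"   # the run traced here reports anon_hugepages=0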
00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.477 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.477 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 
19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.478 19:13:13 -- setup/common.sh@33 -- # echo 0 00:04:15.478 19:13:13 -- setup/common.sh@33 -- # return 0 00:04:15.478 19:13:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:15.478 19:13:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.478 19:13:13 -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.478 19:13:13 -- setup/common.sh@18 -- # local node= 00:04:15.478 19:13:13 -- setup/common.sh@19 -- # local var val 00:04:15.478 19:13:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.478 19:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.478 19:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.478 19:13:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.478 19:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.478 19:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.478 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.478 19:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44040276 kB' 'MemAvailable: 47788436 kB' 'Buffers: 2708 kB' 'Cached: 11939244 kB' 'SwapCached: 0 kB' 'Active: 8639468 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247336 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453360 kB' 'Mapped: 164444 kB' 'Shmem: 7797216 kB' 'KReclaimable: 194008 kB' 'Slab: 669040 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475032 kB' 'KernelStack: 12896 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9304580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 
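The three lookups in this block (AnonHugePages, HugePages_Surp, HugePages_Rsvd) feed the consistency check that follows at hugepages.sh@107: the kernel-reported hugepage total has to match the count the test configured once surplus and reserved pages are accounted for. A simplified standalone rendering of that check, assuming the standard /proc/meminfo fields; this is a sketch of the idea, not the script's exact expression:

# Read the global hugepage counters and compare them against the configured target.
nr_hugepages=1024   # the value this test requested (hugepages.sh@57 above)
read -r total resv surp < <(awk '
    /^HugePages_Total:/ {t = $2}
    /^HugePages_Rsvd:/  {r = $2}
    /^HugePages_Surp:/  {s = $2}
    END {print t, r, s}' /proc/meminfo)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent"
else
    echo "hugepage accounting mismatch: total=$total" >&2
fi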
00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.479 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.479 19:13:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.480 19:13:13 -- setup/common.sh@33 -- # echo 0 00:04:15.480 19:13:13 -- setup/common.sh@33 -- # return 0 00:04:15.480 19:13:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:15.480 19:13:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.480 nr_hugepages=1024 00:04:15.480 19:13:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.480 resv_hugepages=0 00:04:15.480 19:13:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.480 surplus_hugepages=0 00:04:15.480 19:13:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.480 anon_hugepages=0 00:04:15.480 19:13:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.480 19:13:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.480 19:13:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.480 19:13:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.480 19:13:13 -- setup/common.sh@18 -- # local node= 00:04:15.480 19:13:13 -- setup/common.sh@19 -- # local var val 00:04:15.480 19:13:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.480 19:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.480 19:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.480 19:13:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.480 19:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.480 19:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.480 19:13:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44039452 kB' 'MemAvailable: 47787612 kB' 'Buffers: 2708 kB' 'Cached: 11939256 kB' 'SwapCached: 0 kB' 'Active: 8639996 kB' 'Inactive: 3752604 kB' 'Active(anon): 8247864 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453884 kB' 'Mapped: 164444 kB' 'Shmem: 7797228 kB' 'KReclaimable: 194008 kB' 'Slab: 669040 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475032 kB' 'KernelStack: 13216 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9303204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198776 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.480 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.480 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 
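The run of entries above and below is one pass of the meminfo lookup helper: the file is slurped with mapfile, any "Node <n> " prefix is stripped, and each line is split on IFS=': ' into a key and a value until the requested key (HugePages_Total here) matches, at which point its value is echoed back. A minimal standalone sketch of that pattern, assuming plain bash and a hypothetical helper name (this is not the project's setup/common.sh, only the same idea):

  #!/usr/bin/env bash
  # Sketch of the lookup pattern traced in this log: load /proc/meminfo (or a
  # per-node meminfo file), strip the "Node <n> " prefix that per-node files
  # add, then split each line on ': ' and print the value of the requested key.
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      local -a mem=()
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node <n> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo_sketch HugePages_Total     # prints 1024 on this box
  get_meminfo_sketch HugePages_Surp 0    # per-node form, queried later in the trace

The backslash-escaped targets in the entries (e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are simply how xtrace renders a literal match target, so every repeated [[ ... ]] / continue pair is this same key comparison moving on to the next meminfo line.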
00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.481 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.481 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.482 19:13:13 -- setup/common.sh@33 -- # echo 1024 00:04:15.482 19:13:13 -- setup/common.sh@33 -- # return 0 00:04:15.482 19:13:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.482 19:13:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.482 19:13:13 -- setup/hugepages.sh@27 -- # local node 00:04:15.482 19:13:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.482 19:13:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.482 19:13:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.482 19:13:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:15.482 19:13:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.482 19:13:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.482 19:13:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.482 19:13:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.482 19:13:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.482 19:13:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.482 19:13:13 -- setup/common.sh@18 -- # local node=0 00:04:15.482 19:13:13 -- setup/common.sh@19 -- # local var val 00:04:15.482 19:13:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.482 19:13:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.482 19:13:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.482 19:13:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.482 19:13:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.482 19:13:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 20045304 kB' 'MemUsed: 12780552 kB' 'SwapCached: 0 kB' 'Active: 5785240 kB' 'Inactive: 3677028 kB' 'Active(anon): 5567576 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339648 kB' 'Mapped: 80824 kB' 'AnonPages: 125728 kB' 'Shmem: 5444956 kB' 'KernelStack: 7848 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96304 kB' 'Slab: 381444 kB' 'SReclaimable: 96304 kB' 'SUnreclaim: 285140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.482 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.482 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # continue 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.483 19:13:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.483 19:13:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.483 19:13:13 -- setup/common.sh@33 -- # echo 0 00:04:15.483 19:13:13 -- setup/common.sh@33 -- # return 0 00:04:15.483 19:13:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.483 19:13:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.483 
19:13:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.483 19:13:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.483 19:13:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.483 node0=1024 expecting 1024 00:04:15.483 19:13:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.483 19:13:13 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:15.483 19:13:13 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:15.483 19:13:13 -- setup/hugepages.sh@202 -- # setup output 00:04:15.483 19:13:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.483 19:13:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.862 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:16.862 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.862 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:16.862 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:16.862 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:16.862 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:16.862 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:16.862 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:16.862 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.862 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:16.863 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:16.863 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:16.863 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:16.863 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:16.863 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:16.863 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:16.863 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:16.863 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:16.863 19:13:15 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:16.863 19:13:15 -- setup/hugepages.sh@89 -- # local node 00:04:16.863 19:13:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.863 19:13:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.863 19:13:15 -- setup/hugepages.sh@92 -- # local surp 00:04:16.863 19:13:15 -- setup/hugepages.sh@93 -- # local resv 00:04:16.863 19:13:15 -- setup/hugepages.sh@94 -- # local anon 00:04:16.863 19:13:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.863 19:13:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.863 19:13:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.863 19:13:15 -- setup/common.sh@18 -- # local node= 00:04:16.863 19:13:15 -- setup/common.sh@19 -- # local var val 00:04:16.863 19:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.863 19:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.863 19:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.863 19:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.863 19:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.863 19:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 
44028588 kB' 'MemAvailable: 47776748 kB' 'Buffers: 2708 kB' 'Cached: 11939308 kB' 'SwapCached: 0 kB' 'Active: 8638776 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246644 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452656 kB' 'Mapped: 164556 kB' 'Shmem: 7797280 kB' 'KReclaimable: 194008 kB' 'Slab: 669008 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475000 kB' 'KernelStack: 12912 kB' 'PageTables: 7380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9300340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198472 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
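Just before this second lookup pass, hugepages.sh@96 read the transparent-hugepage policy string ("always [madvise] never") and, since it does not contain "[never]", went on to sample AnonHugePages (which comes back as 0 further down). A short sketch of that step, assuming the standard sysfs path and hypothetical variable names:

  #!/usr/bin/env bash
  # Sketch of the THP check at hugepages.sh@96: read the active
  # transparent_hugepage policy and, unless it is set to "never", also record
  # AnonHugePages so THP-backed memory is reported next to the static pool.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
  else
      anon=0
  fi
  echo "anon_hugepages=$anon"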
00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.863 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.863 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.864 19:13:15 -- setup/common.sh@33 -- # echo 0 00:04:16.864 19:13:15 -- setup/common.sh@33 -- # return 0 00:04:16.864 19:13:15 -- setup/hugepages.sh@97 -- # anon=0 00:04:16.864 19:13:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.864 19:13:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.864 19:13:15 -- setup/common.sh@18 -- # local node= 00:04:16.864 19:13:15 -- setup/common.sh@19 -- # local var val 00:04:16.864 19:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.864 19:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.864 19:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.864 19:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.864 19:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.864 19:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44029220 kB' 'MemAvailable: 47777380 kB' 'Buffers: 2708 kB' 'Cached: 11939312 kB' 'SwapCached: 0 kB' 'Active: 8638840 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246708 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452676 kB' 'Mapped: 164504 kB' 'Shmem: 7797284 kB' 'KReclaimable: 194008 kB' 'Slab: 669016 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475008 kB' 'KernelStack: 12896 kB' 'PageTables: 7292 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9300352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198472 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- 
setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.864 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.864 19:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 
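These system-wide lookups (HugePages_Surp here, then HugePages_Rsvd just below) feed the same accounting that produced the "node0=1024 expecting 1024" line earlier: the verify step compares the kernel's per-node hugepage counts against the requested split, with all 1024 pages expected on node0 in this run. A compact sketch of that comparison, assuming the per-node sysfs meminfo files seen in the trace and a hypothetical array name in place of nodes_test:

  #!/usr/bin/env bash
  # Sketch of the per-node expectation check: read HugePages_Total from each
  # NUMA node's meminfo and compare it with the layout the test asked for
  # (1024 pages on node0, 0 on node1, matching this run).
  expected=( [0]=1024 [1]=0 )
  for node in "${!expected[@]}"; do
      got=$(awk '/HugePages_Total:/ {print $NF}' \
            "/sys/devices/system/node/node$node/meminfo")
      echo "node$node=$got expecting ${expected[$node]}"
      [[ $got -eq ${expected[$node]} ]] || echo "node$node mismatch" >&2
  done

System-wide, the same counters have to satisfy the check visible earlier in the trace, (( 1024 == nr_hugepages + surp + resv )): the HugePages_Total reported by the kernel must equal the configured nr_hugepages plus any surplus and reserved pages, all of which are 0 here.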
00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.865 19:13:15 -- setup/common.sh@33 -- # echo 0 00:04:16.865 19:13:15 -- setup/common.sh@33 -- # return 0 00:04:16.865 19:13:15 -- setup/hugepages.sh@99 -- # surp=0 00:04:16.865 19:13:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.865 19:13:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.865 19:13:15 -- setup/common.sh@18 -- # local node= 00:04:16.865 19:13:15 -- setup/common.sh@19 -- # local var val 00:04:16.865 19:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.865 19:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.865 19:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.865 19:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.865 19:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.865 19:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44029868 kB' 'MemAvailable: 47778028 kB' 'Buffers: 2708 kB' 'Cached: 11939328 kB' 'SwapCached: 0 kB' 'Active: 8638520 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246388 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452316 kB' 'Mapped: 164500 kB' 'Shmem: 7797300 kB' 'KReclaimable: 194008 kB' 'Slab: 669008 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 475000 kB' 'KernelStack: 12848 kB' 'PageTables: 7124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9300372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198488 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.865 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.865 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.865 19:13:15 -- setup/common.sh@32 
-- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 
19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 
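Before each of these scans the trace also shows the snapshot step: mapfile -t mem reads the whole meminfo source into an array, and the expansion "${mem[@]#Node +([0-9]) }" strips the "Node N " prefix that per-node meminfo files carry, so the same parser works for /proc/meminfo and for /sys/devices/system/node/nodeN/meminfo. A small reproduction of that prefix strip (assumes a machine where node0 exists, and needs extglob just as the traced script does):

    #!/usr/bin/env bash
    shopt -s extglob                          # required for the +([0-9]) pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")          # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]:0:3}"             # show the first few cleaned records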
00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # 
[[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.866 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.866 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.867 19:13:15 -- setup/common.sh@33 -- # echo 0 00:04:16.867 19:13:15 -- setup/common.sh@33 -- # return 0 00:04:16.867 19:13:15 -- setup/hugepages.sh@100 -- # resv=0 00:04:16.867 19:13:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.867 nr_hugepages=1024 00:04:16.867 19:13:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.867 
resv_hugepages=0 00:04:16.867 19:13:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.867 surplus_hugepages=0 00:04:16.867 19:13:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.867 anon_hugepages=0 00:04:16.867 19:13:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.867 19:13:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.867 19:13:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.867 19:13:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.867 19:13:15 -- setup/common.sh@18 -- # local node= 00:04:16.867 19:13:15 -- setup/common.sh@19 -- # local var val 00:04:16.867 19:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.867 19:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.867 19:13:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.867 19:13:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.867 19:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.867 19:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60535672 kB' 'MemFree: 44030484 kB' 'MemAvailable: 47778644 kB' 'Buffers: 2708 kB' 'Cached: 11939340 kB' 'SwapCached: 0 kB' 'Active: 8638512 kB' 'Inactive: 3752604 kB' 'Active(anon): 8246380 kB' 'Inactive(anon): 0 kB' 'Active(file): 392132 kB' 'Inactive(file): 3752604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452236 kB' 'Mapped: 164424 kB' 'Shmem: 7797312 kB' 'KReclaimable: 194008 kB' 'Slab: 668980 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 474972 kB' 'KernelStack: 12880 kB' 'PageTables: 7232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37607864 kB' 'Committed_AS: 9300388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198488 kB' 'VmallocChunk: 0 kB' 'Percpu: 33984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1474140 kB' 'DirectMap2M: 9979904 kB' 'DirectMap1G: 58720256 kB' 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- 
setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.867 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.867 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 
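This HugePages_Total scan feeds the accounting check traced around it in hugepages.sh: the pool the test requested has to equal what the kernel reports once surplus and reserved pages are added back in, which is why the script collected surp=0 and resv=0 first. The invariant itself is plain arithmetic; a hedged sketch using awk instead of the script's own parser:

    #!/usr/bin/env bash
    # Sketch: check that the hugepage pool matches what the test configured.
    nr_hugepages=1024                                              # what the test asked for
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) \
        || echo "unexpected hugepage accounting: total=$total" >&2
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"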
00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.868 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.868 19:13:15 -- setup/common.sh@33 -- # echo 1024 00:04:16.868 19:13:15 -- setup/common.sh@33 -- # return 0 00:04:16.868 19:13:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.868 19:13:15 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.868 19:13:15 -- setup/hugepages.sh@27 -- # local node 00:04:16.868 19:13:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.868 19:13:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.868 19:13:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.868 19:13:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.868 19:13:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.868 19:13:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.868 19:13:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.868 19:13:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.868 19:13:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.868 19:13:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.868 19:13:15 -- setup/common.sh@18 -- # local node=0 00:04:16.868 19:13:15 -- setup/common.sh@19 -- # local var val 00:04:16.868 19:13:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.868 19:13:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.868 19:13:15 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:04:16.868 19:13:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.868 19:13:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.868 19:13:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.868 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32825856 kB' 'MemFree: 20033268 kB' 'MemUsed: 12792588 kB' 'SwapCached: 0 kB' 'Active: 5783320 kB' 'Inactive: 3677028 kB' 'Active(anon): 5565656 kB' 'Inactive(anon): 0 kB' 'Active(file): 217664 kB' 'Inactive(file): 3677028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9339656 kB' 'Mapped: 80880 kB' 'AnonPages: 123796 kB' 'Shmem: 5444964 kB' 'KernelStack: 7432 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96304 kB' 'Slab: 381392 kB' 'SReclaimable: 96304 kB' 'SUnreclaim: 285088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # continue 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.869 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.869 19:13:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 
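This pass differs from the earlier ones only in where it reads from: because get_meminfo was called with node 0, common.sh noticed that /sys/devices/system/node/node0/meminfo exists and switched mem_f to it, so the HugePages_Surp value being extracted here is per NUMA node rather than system-wide. The switch itself is small; a sketch of the same idea outside the harness:

    #!/usr/bin/env bash
    # Sketch: pick the meminfo source for an optional NUMA node argument.
    node=${1:-}                                   # empty -> system-wide stats
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    grep HugePages "$mem_f"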
00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
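The per-node bookkeeping around this scan comes from the get_nodes trace a little earlier: hugepages.sh globs the node directories, records how many pages each one holds (1024 on node 0, 0 on node 1, no_nodes=2), and then compares against the expectation echoed just below. A hedged sketch of that enumeration; the exact sysfs knob read here is an assumption, since the log shows only the already-expanded assignments:

    #!/usr/bin/env bash
    shopt -s nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nodes_sys[$n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    declare -p nodes_sys          # e.g. nodes_sys=([0]="1024" [1]="0")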
00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # continue 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.129 19:13:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.129 19:13:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.129 19:13:15 -- setup/common.sh@33 -- # echo 0 00:04:17.129 19:13:15 -- setup/common.sh@33 -- # return 0 00:04:17.129 19:13:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.129 19:13:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.129 19:13:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.129 19:13:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.129 19:13:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.129 node0=1024 expecting 1024 00:04:17.129 19:13:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.129 00:04:17.129 real 0m3.034s 00:04:17.129 user 0m1.268s 00:04:17.129 sys 0m1.698s 00:04:17.129 19:13:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.129 19:13:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.129 ************************************ 00:04:17.129 END TEST no_shrink_alloc 00:04:17.129 ************************************ 00:04:17.129 19:13:15 -- setup/hugepages.sh@217 -- # clear_hp 00:04:17.129 19:13:15 -- setup/hugepages.sh@37 -- # local node hp 00:04:17.129 19:13:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:17.129 19:13:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.129 19:13:15 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.129 19:13:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.129 19:13:15 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.129 19:13:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:17.129 19:13:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.129 19:13:15 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.129 19:13:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.129 19:13:15 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.129 19:13:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:17.129 19:13:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:17.129 00:04:17.129 real 0m12.096s 00:04:17.129 user 0m4.700s 00:04:17.129 sys 0m6.353s 00:04:17.129 19:13:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.129 19:13:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.130 ************************************ 00:04:17.130 END TEST hugepages 00:04:17.130 ************************************ 00:04:17.130 19:13:15 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:17.130 
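The clear_hp trace at the end of the hugepages suite just above is the teardown: for every node it writes 0 into each hugepage-size knob under sysfs and exports CLEAR_HUGE=yes so the next setup.sh invocation starts from an empty pool. A minimal equivalent, shown only as a sketch and requiring root:

    #!/usr/bin/env bash
    # Sketch: return every per-node hugepage pool to zero (run as root).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
    export CLEAR_HUGE=yes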
19:13:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.130 19:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.130 19:13:15 -- common/autotest_common.sh@10 -- # set +x 00:04:17.130 ************************************ 00:04:17.130 START TEST driver 00:04:17.130 ************************************ 00:04:17.130 19:13:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:17.130 * Looking for test storage... 00:04:17.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:17.130 19:13:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:17.130 19:13:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:17.130 19:13:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:17.130 19:13:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:17.130 19:13:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:17.130 19:13:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:17.130 19:13:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:17.130 19:13:15 -- scripts/common.sh@335 -- # IFS=.-: 00:04:17.130 19:13:15 -- scripts/common.sh@335 -- # read -ra ver1 00:04:17.130 19:13:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.130 19:13:15 -- scripts/common.sh@336 -- # read -ra ver2 00:04:17.130 19:13:15 -- scripts/common.sh@337 -- # local 'op=<' 00:04:17.130 19:13:15 -- scripts/common.sh@339 -- # ver1_l=2 00:04:17.130 19:13:15 -- scripts/common.sh@340 -- # ver2_l=1 00:04:17.130 19:13:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:17.130 19:13:15 -- scripts/common.sh@343 -- # case "$op" in 00:04:17.130 19:13:15 -- scripts/common.sh@344 -- # : 1 00:04:17.130 19:13:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:17.130 19:13:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.130 19:13:15 -- scripts/common.sh@364 -- # decimal 1 00:04:17.130 19:13:15 -- scripts/common.sh@352 -- # local d=1 00:04:17.130 19:13:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.130 19:13:15 -- scripts/common.sh@354 -- # echo 1 00:04:17.130 19:13:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:17.130 19:13:15 -- scripts/common.sh@365 -- # decimal 2 00:04:17.130 19:13:15 -- scripts/common.sh@352 -- # local d=2 00:04:17.130 19:13:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.130 19:13:15 -- scripts/common.sh@354 -- # echo 2 00:04:17.130 19:13:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:17.130 19:13:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:17.130 19:13:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:17.130 19:13:15 -- scripts/common.sh@367 -- # return 0 00:04:17.130 19:13:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.130 19:13:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.130 --rc genhtml_branch_coverage=1 00:04:17.130 --rc genhtml_function_coverage=1 00:04:17.130 --rc genhtml_legend=1 00:04:17.130 --rc geninfo_all_blocks=1 00:04:17.130 --rc geninfo_unexecuted_blocks=1 00:04:17.130 00:04:17.130 ' 00:04:17.130 19:13:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.130 --rc genhtml_branch_coverage=1 00:04:17.130 --rc genhtml_function_coverage=1 00:04:17.130 --rc genhtml_legend=1 00:04:17.130 --rc geninfo_all_blocks=1 00:04:17.130 --rc geninfo_unexecuted_blocks=1 00:04:17.130 00:04:17.130 ' 00:04:17.130 19:13:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.130 --rc genhtml_branch_coverage=1 00:04:17.130 --rc genhtml_function_coverage=1 00:04:17.130 --rc genhtml_legend=1 00:04:17.130 --rc geninfo_all_blocks=1 00:04:17.130 --rc geninfo_unexecuted_blocks=1 00:04:17.130 00:04:17.130 ' 00:04:17.130 19:13:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:17.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.130 --rc genhtml_branch_coverage=1 00:04:17.130 --rc genhtml_function_coverage=1 00:04:17.130 --rc genhtml_legend=1 00:04:17.130 --rc geninfo_all_blocks=1 00:04:17.130 --rc geninfo_unexecuted_blocks=1 00:04:17.130 00:04:17.130 ' 00:04:17.130 19:13:15 -- setup/driver.sh@68 -- # setup reset 00:04:17.130 19:13:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.130 19:13:15 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.423 19:13:18 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.423 19:13:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.423 19:13:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.423 19:13:18 -- common/autotest_common.sh@10 -- # set +x 00:04:20.423 ************************************ 00:04:20.423 START TEST guess_driver 00:04:20.423 ************************************ 00:04:20.423 19:13:18 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:20.423 19:13:18 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.423 19:13:18 -- setup/driver.sh@47 -- # local fail=0 00:04:20.423 19:13:18 -- setup/driver.sh@49 -- # pick_driver 00:04:20.423 19:13:18 -- 
setup/driver.sh@36 -- # vfio 00:04:20.423 19:13:18 -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.423 19:13:18 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.423 19:13:18 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.423 19:13:18 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.423 19:13:18 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.423 19:13:18 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:20.423 19:13:18 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.423 19:13:18 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.423 19:13:18 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.423 19:13:18 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.423 19:13:18 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.423 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.423 19:13:18 -- setup/driver.sh@30 -- # return 0 00:04:20.423 19:13:18 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.423 19:13:18 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.423 19:13:18 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.423 19:13:18 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:20.423 Looking for driver=vfio-pci 00:04:20.423 19:13:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.423 19:13:18 -- setup/driver.sh@45 -- # setup output config 00:04:20.423 19:13:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.423 19:13:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 
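The guess_driver trace above settles on a driver by elimination: it checks whether VFIO's unsafe no-IOMMU toggle is even present, counts the populated /sys/kernel/iommu_groups entries (141 on this node), and asks modprobe --show-depends vfio_pci whether the module chain resolves to real .ko files before committing to vfio-pci. A condensed sketch of that decision, not the script itself; the uio_pci_generic fallback is an assumption about what would be chosen if VFIO were unusable:

    #!/usr/bin/env bash
    # Sketch: choose a userspace PCI driver the way the trace does.
    shopt -s nullglob                 # so an empty iommu_groups dir counts as zero
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver=uio_pci_generic        # assumed fallback, not shown in this log
    fi
    echo "Looking for driver=$driver"

The marker/setup_driver pairs that follow ([[ -> == \-\> ]] next to [[ vfio-pci == vfio-pci ]]) are the script re-reading setup.sh's config output and confirming that every device really was bound to the driver it just chose.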
19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.362 19:13:19 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.362 19:13:19 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.362 19:13:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.298 19:13:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:22.298 19:13:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:22.298 19:13:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:22.298 19:13:20 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:22.298 19:13:20 -- setup/driver.sh@65 -- # setup reset 00:04:22.298 19:13:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.298 19:13:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.592 00:04:25.592 real 0m5.105s 00:04:25.592 user 0m1.170s 00:04:25.592 sys 0m2.015s 00:04:25.592 19:13:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.592 19:13:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.592 ************************************ 00:04:25.592 END TEST guess_driver 00:04:25.592 ************************************ 00:04:25.592 00:04:25.592 real 0m7.932s 00:04:25.592 user 0m1.841s 00:04:25.592 sys 0m3.177s 00:04:25.592 19:13:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.592 19:13:23 -- 
common/autotest_common.sh@10 -- # set +x 00:04:25.592 ************************************ 00:04:25.592 END TEST driver 00:04:25.592 ************************************ 00:04:25.592 19:13:23 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:25.592 19:13:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.592 19:13:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.592 19:13:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.592 ************************************ 00:04:25.592 START TEST devices 00:04:25.592 ************************************ 00:04:25.592 19:13:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:25.592 * Looking for test storage... 00:04:25.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.592 19:13:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:25.592 19:13:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:25.592 19:13:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:25.592 19:13:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:25.592 19:13:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:25.592 19:13:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:25.592 19:13:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:25.592 19:13:23 -- scripts/common.sh@335 -- # IFS=.-: 00:04:25.592 19:13:23 -- scripts/common.sh@335 -- # read -ra ver1 00:04:25.592 19:13:23 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.592 19:13:23 -- scripts/common.sh@336 -- # read -ra ver2 00:04:25.592 19:13:23 -- scripts/common.sh@337 -- # local 'op=<' 00:04:25.592 19:13:23 -- scripts/common.sh@339 -- # ver1_l=2 00:04:25.592 19:13:23 -- scripts/common.sh@340 -- # ver2_l=1 00:04:25.592 19:13:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:25.592 19:13:23 -- scripts/common.sh@343 -- # case "$op" in 00:04:25.592 19:13:23 -- scripts/common.sh@344 -- # : 1 00:04:25.592 19:13:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:25.592 19:13:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.592 19:13:23 -- scripts/common.sh@364 -- # decimal 1 00:04:25.592 19:13:23 -- scripts/common.sh@352 -- # local d=1 00:04:25.592 19:13:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.592 19:13:23 -- scripts/common.sh@354 -- # echo 1 00:04:25.592 19:13:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:25.592 19:13:23 -- scripts/common.sh@365 -- # decimal 2 00:04:25.592 19:13:23 -- scripts/common.sh@352 -- # local d=2 00:04:25.592 19:13:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.592 19:13:23 -- scripts/common.sh@354 -- # echo 2 00:04:25.592 19:13:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:25.592 19:13:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:25.592 19:13:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:25.592 19:13:23 -- scripts/common.sh@367 -- # return 0 00:04:25.592 19:13:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.592 19:13:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.592 --rc genhtml_branch_coverage=1 00:04:25.592 --rc genhtml_function_coverage=1 00:04:25.592 --rc genhtml_legend=1 00:04:25.592 --rc geninfo_all_blocks=1 00:04:25.592 --rc geninfo_unexecuted_blocks=1 00:04:25.592 00:04:25.592 ' 00:04:25.592 19:13:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.592 --rc genhtml_branch_coverage=1 00:04:25.592 --rc genhtml_function_coverage=1 00:04:25.592 --rc genhtml_legend=1 00:04:25.592 --rc geninfo_all_blocks=1 00:04:25.592 --rc geninfo_unexecuted_blocks=1 00:04:25.592 00:04:25.592 ' 00:04:25.592 19:13:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.592 --rc genhtml_branch_coverage=1 00:04:25.592 --rc genhtml_function_coverage=1 00:04:25.592 --rc genhtml_legend=1 00:04:25.592 --rc geninfo_all_blocks=1 00:04:25.592 --rc geninfo_unexecuted_blocks=1 00:04:25.592 00:04:25.592 ' 00:04:25.592 19:13:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:25.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.592 --rc genhtml_branch_coverage=1 00:04:25.592 --rc genhtml_function_coverage=1 00:04:25.592 --rc genhtml_legend=1 00:04:25.592 --rc geninfo_all_blocks=1 00:04:25.592 --rc geninfo_unexecuted_blocks=1 00:04:25.592 00:04:25.592 ' 00:04:25.592 19:13:23 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:25.592 19:13:23 -- setup/devices.sh@192 -- # setup reset 00:04:25.592 19:13:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.592 19:13:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.973 19:13:24 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.973 19:13:24 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:26.973 19:13:24 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:26.973 19:13:24 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:26.973 19:13:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.973 19:13:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:26.973 19:13:24 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:26.973 19:13:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.973 19:13:24 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.973 19:13:24 -- setup/devices.sh@196 -- # blocks=() 00:04:26.973 19:13:24 -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.973 19:13:24 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.973 19:13:24 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.973 19:13:24 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.973 19:13:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.973 19:13:24 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.973 19:13:24 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.973 19:13:24 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:26.973 19:13:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:26.973 19:13:24 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.973 19:13:24 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:26.973 19:13:24 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.973 No valid GPT data, bailing 00:04:26.973 19:13:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.973 19:13:24 -- scripts/common.sh@393 -- # pt= 00:04:26.973 19:13:24 -- scripts/common.sh@394 -- # return 1 00:04:26.973 19:13:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.973 19:13:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.973 19:13:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.973 19:13:24 -- setup/common.sh@80 -- # echo 1000204886016 00:04:26.973 19:13:24 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:26.973 19:13:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.973 19:13:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:26.973 19:13:24 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:26.973 19:13:24 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:26.973 19:13:24 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.973 19:13:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.973 19:13:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.973 19:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 ************************************ 00:04:26.973 START TEST nvme_mount 00:04:26.973 ************************************ 00:04:26.973 19:13:24 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:26.973 19:13:24 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:26.973 19:13:24 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:26.973 19:13:24 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.973 19:13:24 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.973 19:13:24 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:26.973 19:13:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.973 19:13:24 -- setup/common.sh@40 -- # local part_no=1 00:04:26.973 19:13:24 -- setup/common.sh@41 -- # local size=1073741824 00:04:26.973 19:13:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.973 19:13:24 -- setup/common.sh@44 -- # parts=() 00:04:26.973 19:13:24 -- setup/common.sh@44 -- # local parts 00:04:26.973 19:13:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.973 19:13:24 -- setup/common.sh@46 -- # (( part <= part_no )) 
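Note: the device selection traced above reduces to a short filter: skip namespaces that already carry a partition table and keep the first one at least min_disk_size bytes large. This is a minimal illustrative sketch of that filter, not the script's own code; it assumes root privileges, blkid being available, and the usual 512-byte sysfs sector unit. The 3 GiB floor matches min_disk_size=3221225472 in the trace.
  shopt -s nullglob
  min_disk_size=$((3 * 1024 * 1024 * 1024))      # 3221225472 bytes, as in the trace
  for block in /sys/block/nvme*n*; do
      dev=/dev/${block##*/}
      # skip devices that already have a partition table (blkid prints its PTTYPE)
      [[ -n "$(blkid -s PTTYPE -o value "$dev")" ]] && continue
      # /sys/block/<dev>/size is in 512-byte sectors; convert to bytes
      size=$(( $(cat "$block/size") * 512 ))
      (( size >= min_disk_size )) && { echo "$dev"; break; }
  done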
00:04:26.973 19:13:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.973 19:13:24 -- setup/common.sh@46 -- # (( part++ )) 00:04:26.973 19:13:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.973 19:13:24 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.973 19:13:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.973 19:13:24 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:27.912 Creating new GPT entries in memory. 00:04:27.912 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.912 other utilities. 00:04:27.912 19:13:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.912 19:13:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.912 19:13:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.912 19:13:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.912 19:13:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.850 Creating new GPT entries in memory. 00:04:28.850 The operation has completed successfully. 00:04:28.850 19:13:26 -- setup/common.sh@57 -- # (( part++ )) 00:04:28.850 19:13:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.850 19:13:26 -- setup/common.sh@62 -- # wait 1063087 00:04:28.850 19:13:27 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.850 19:13:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.850 19:13:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.850 19:13:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.850 19:13:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.850 19:13:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.850 19:13:27 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.850 19:13:27 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:28.850 19:13:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.850 19:13:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.850 19:13:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.850 19:13:27 -- setup/devices.sh@53 -- # local found=0 00:04:28.850 19:13:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.850 19:13:27 -- setup/devices.sh@56 -- # : 00:04:28.850 19:13:27 -- setup/devices.sh@59 -- # local pci status 00:04:28.850 19:13:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.850 19:13:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:28.850 19:13:27 -- setup/devices.sh@47 -- # setup output config 00:04:28.850 19:13:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.850 19:13:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
config 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.226 19:13:28 -- setup/devices.sh@63 -- # found=1 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.226 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.226 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.227 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.227 19:13:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.227 19:13:28 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.227 19:13:28 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.227 19:13:28 -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.227 19:13:28 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.227 19:13:28 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.227 19:13:28 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.227 19:13:28 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.486 19:13:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.486 19:13:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.486 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.486 19:13:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.486 19:13:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.745 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.745 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.745 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.745 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.745 19:13:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.745 19:13:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.745 19:13:28 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.745 19:13:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.745 19:13:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.745 19:13:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.745 19:13:28 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.745 19:13:28 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:30.745 19:13:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.745 19:13:28 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.745 19:13:28 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.745 19:13:28 -- setup/devices.sh@53 -- # local found=0 00:04:30.745 19:13:28 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.745 19:13:28 -- setup/devices.sh@56 -- # : 00:04:30.745 19:13:28 -- setup/devices.sh@59 -- # local pci status 00:04:30.745 19:13:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.745 19:13:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:30.745 19:13:28 -- setup/devices.sh@47 -- # setup output config 00:04:30.745 19:13:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.745 19:13:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 
19:13:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:31.680 19:13:29 -- setup/devices.sh@63 -- # found=1 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.680 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.680 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.939 19:13:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.939 19:13:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.939 19:13:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.939 19:13:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.939 19:13:30 -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.939 19:13:30 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.939 19:13:30 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:31.939 19:13:30 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:31.939 19:13:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:31.939 19:13:30 -- setup/devices.sh@50 -- # local mount_point= 00:04:31.939 19:13:30 -- setup/devices.sh@51 -- # local test_file= 00:04:31.939 19:13:30 -- setup/devices.sh@53 -- # local found=0 00:04:31.939 19:13:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.939 19:13:30 -- setup/devices.sh@59 -- # local pci status 00:04:31.939 19:13:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.939 19:13:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:31.939 19:13:30 -- setup/devices.sh@47 -- # setup output config 00:04:31.939 19:13:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.940 19:13:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:33.316 19:13:31 -- setup/devices.sh@63 -- # found=1 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.316 19:13:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:33.316 19:13:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.576 19:13:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.576 19:13:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.576 19:13:31 -- setup/devices.sh@68 -- # return 0 00:04:33.576 19:13:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:33.576 19:13:31 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.576 19:13:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.576 19:13:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.576 19:13:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.576 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.576 00:04:33.576 real 0m6.661s 00:04:33.576 user 0m1.549s 00:04:33.576 sys 0m2.718s 00:04:33.576 19:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.576 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:04:33.576 ************************************ 00:04:33.576 END TEST nvme_mount 00:04:33.576 ************************************ 00:04:33.576 19:13:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:33.576 19:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.576 19:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.576 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:04:33.576 ************************************ 00:04:33.576 START TEST dm_mount 00:04:33.576 ************************************ 00:04:33.576 19:13:31 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:33.576 19:13:31 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:33.576 19:13:31 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:33.576 19:13:31 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:33.576 19:13:31 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:33.576 19:13:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.576 19:13:31 -- setup/common.sh@40 -- # local part_no=2 00:04:33.576 19:13:31 -- setup/common.sh@41 -- # local size=1073741824 00:04:33.576 19:13:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.576 19:13:31 -- setup/common.sh@44 -- # parts=() 00:04:33.576 19:13:31 -- setup/common.sh@44 -- # local parts 00:04:33.576 19:13:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.576 19:13:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.576 19:13:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.576 19:13:31 -- setup/common.sh@46 -- # (( part++ )) 00:04:33.576 19:13:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.576 19:13:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.576 19:13:31 -- setup/common.sh@46 -- # (( part++ )) 00:04:33.576 19:13:31 -- setup/common.sh@46 -- # (( part <= 
part_no )) 00:04:33.576 19:13:31 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.576 19:13:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.576 19:13:31 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:34.561 Creating new GPT entries in memory. 00:04:34.561 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.561 other utilities. 00:04:34.561 19:13:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.561 19:13:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.561 19:13:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.561 19:13:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.561 19:13:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.500 Creating new GPT entries in memory. 00:04:35.500 The operation has completed successfully. 00:04:35.500 19:13:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:35.500 19:13:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.500 19:13:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.500 19:13:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.500 19:13:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:36.437 The operation has completed successfully. 00:04:36.437 19:13:34 -- setup/common.sh@57 -- # (( part++ )) 00:04:36.437 19:13:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.437 19:13:34 -- setup/common.sh@62 -- # wait 1065549 00:04:36.696 19:13:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:36.696 19:13:34 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.696 19:13:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.696 19:13:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:36.696 19:13:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:36.696 19:13:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.696 19:13:34 -- setup/devices.sh@161 -- # break 00:04:36.696 19:13:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.696 19:13:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:36.696 19:13:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:36.696 19:13:34 -- setup/devices.sh@166 -- # dm=dm-0 00:04:36.696 19:13:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:36.696 19:13:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:36.696 19:13:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.696 19:13:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:36.696 19:13:34 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.696 19:13:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.696 19:13:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:36.696 19:13:34 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.696 19:13:34 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.696 19:13:34 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:36.696 19:13:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:36.696 19:13:34 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.696 19:13:34 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.696 19:13:34 -- setup/devices.sh@53 -- # local found=0 00:04:36.696 19:13:34 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.696 19:13:34 -- setup/devices.sh@56 -- # : 00:04:36.696 19:13:34 -- setup/devices.sh@59 -- # local pci status 00:04:36.696 19:13:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.696 19:13:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:36.696 19:13:34 -- setup/devices.sh@47 -- # setup output config 00:04:36.696 19:13:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.696 19:13:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:38.074 19:13:35 -- setup/devices.sh@63 -- # found=1 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.074 19:13:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.074 19:13:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.074 19:13:36 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:38.075 19:13:36 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.075 19:13:36 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.075 19:13:36 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.075 19:13:36 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.075 19:13:36 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:38.075 19:13:36 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:38.075 19:13:36 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:38.075 19:13:36 -- setup/devices.sh@50 -- # local mount_point= 00:04:38.075 19:13:36 -- setup/devices.sh@51 -- # local test_file= 00:04:38.075 19:13:36 -- setup/devices.sh@53 -- # local found=0 00:04:38.075 19:13:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.075 19:13:36 -- setup/devices.sh@59 -- # local pci status 00:04:38.075 19:13:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.075 19:13:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:38.075 19:13:36 -- setup/devices.sh@47 -- # setup output config 00:04:38.075 19:13:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.075 19:13:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:39.452 19:13:37 -- setup/devices.sh@63 -- # found=1 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 
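The verify step traced above amounts to parsing setup.sh config output line by line and confirming that the test controller is reported as held by the dm device, so it is not rebound. A minimal sketch of that loop under the same assumptions (an SPDK checkout at the workspace path, the dm test device still assembled, run as root); the field splitting mirrors the read -r pci _ _ status call in devices.sh, and the variable names here are illustrative:
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  PCI_ALLOWED=0000:88:00.0
  expected='holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0'
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$PCI_ALLOWED" ]] || continue
      # the status column for an in-use controller lists its active consumers
      [[ $status == *"$expected"* ]] && found=1
  done < <(PCI_ALLOWED=$PCI_ALLOWED "$rootdir/scripts/setup.sh" config)
  (( found == 1 )) || echo "controller $PCI_ALLOWED is not held by the dm test device" >&2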
00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.452 19:13:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.452 19:13:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.452 19:13:37 -- setup/devices.sh@68 -- # return 0 00:04:39.452 19:13:37 -- setup/devices.sh@187 -- # cleanup_dm 00:04:39.452 19:13:37 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.452 19:13:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.452 19:13:37 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:39.452 19:13:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:39.452 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.452 19:13:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:39.452 00:04:39.452 real 0m6.051s 00:04:39.452 user 
0m1.069s 00:04:39.452 sys 0m1.863s 00:04:39.452 19:13:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.452 19:13:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.452 ************************************ 00:04:39.452 END TEST dm_mount 00:04:39.452 ************************************ 00:04:39.452 19:13:37 -- setup/devices.sh@1 -- # cleanup 00:04:39.452 19:13:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:39.452 19:13:37 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.452 19:13:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.452 19:13:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.452 19:13:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.453 19:13:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.711 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:39.711 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:39.711 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.711 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.711 19:13:37 -- setup/devices.sh@12 -- # cleanup_dm 00:04:39.711 19:13:37 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.711 19:13:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.711 19:13:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.711 19:13:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.711 19:13:37 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.711 19:13:37 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:39.711 00:04:39.711 real 0m14.817s 00:04:39.711 user 0m3.374s 00:04:39.711 sys 0m5.707s 00:04:39.711 19:13:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.711 19:13:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.711 ************************************ 00:04:39.711 END TEST devices 00:04:39.711 ************************************ 00:04:39.970 00:04:39.970 real 0m46.161s 00:04:39.970 user 0m13.500s 00:04:39.970 sys 0m21.039s 00:04:39.970 19:13:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.970 19:13:38 -- common/autotest_common.sh@10 -- # set +x 00:04:39.970 ************************************ 00:04:39.970 END TEST setup.sh 00:04:39.970 ************************************ 00:04:39.970 19:13:38 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:40.907 Hugepages 00:04:40.907 node hugesize free / total 00:04:40.907 node0 1048576kB 0 / 0 00:04:40.907 node0 2048kB 2048 / 2048 00:04:40.907 node1 1048576kB 0 / 0 00:04:40.907 node1 2048kB 0 / 0 00:04:40.907 00:04:40.907 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.907 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:40.907 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:40.907 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:40.907 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:40.907 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma 
- - 00:04:40.907 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:40.907 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:40.907 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:41.165 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:41.165 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:41.165 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:41.165 19:13:39 -- spdk/autotest.sh@128 -- # uname -s 00:04:41.165 19:13:39 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:41.165 19:13:39 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:41.165 19:13:39 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.539 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.540 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.540 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:43.478 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.478 19:13:41 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:44.859 19:13:42 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:44.859 19:13:42 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:44.859 19:13:42 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.859 19:13:42 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:44.859 19:13:42 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:44.859 19:13:42 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:44.859 19:13:42 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.859 19:13:42 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:44.859 19:13:42 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:44.859 19:13:42 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:44.859 19:13:42 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:88:00.0 00:04:44.859 19:13:42 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.797 Waiting for block devices as requested 00:04:45.797 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:46.056 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:46.056 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:46.056 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:46.314 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:46.314 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:46.314 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:46.314 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:46.573 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:46.573 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:46.573 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 
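The NVMe controller enumeration traced above (get_nvme_bdfs) is a single pipeline over gen_nvme.sh; restated here outside the test harness, assuming the same workspace path and jq installed:
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # gen_nvme.sh emits a bdev JSON config; pull out each controller's PCI address
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"        # prints 0000:88:00.0 on this test node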
00:04:46.573 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:46.831 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:46.831 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:46.831 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:46.831 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:47.091 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:47.091 19:13:45 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:47.091 19:13:45 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1497 -- # grep 0000:88:00.0/nvme/nvme 00:04:47.091 19:13:45 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:47.091 19:13:45 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:47.091 19:13:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:47.091 19:13:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:47.091 19:13:45 -- common/autotest_common.sh@1540 -- # oacs=' 0xf' 00:04:47.091 19:13:45 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:47.091 19:13:45 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:47.091 19:13:45 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:47.091 19:13:45 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:47.091 19:13:45 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:47.091 19:13:45 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:47.091 19:13:45 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:47.091 19:13:45 -- common/autotest_common.sh@1552 -- # continue 00:04:47.091 19:13:45 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:47.091 19:13:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.091 19:13:45 -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 19:13:45 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:47.091 19:13:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.091 19:13:45 -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 19:13:45 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.469 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.469 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:48.469 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:48.469 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.3 
(8086 0e23): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:48.729 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:49.666 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.666 19:13:47 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:49.666 19:13:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.666 19:13:47 -- common/autotest_common.sh@10 -- # set +x 00:04:49.666 19:13:47 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:49.666 19:13:47 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:49.666 19:13:47 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.666 19:13:47 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:49.666 19:13:47 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:49.666 19:13:47 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:49.666 19:13:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:49.666 19:13:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:49.666 19:13:47 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.666 19:13:47 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.666 19:13:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:49.666 19:13:47 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:49.666 19:13:47 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:88:00.0 00:04:49.666 19:13:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:49.666 19:13:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:49.666 19:13:47 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:04:49.666 19:13:47 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:49.666 19:13:47 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:04:49.666 19:13:47 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:88:00.0 00:04:49.666 19:13:47 -- common/autotest_common.sh@1587 -- # [[ -z 0000:88:00.0 ]] 00:04:49.666 19:13:47 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1070985 00:04:49.666 19:13:47 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.666 19:13:47 -- common/autotest_common.sh@1593 -- # waitforlisten 1070985 00:04:49.666 19:13:47 -- common/autotest_common.sh@829 -- # '[' -z 1070985 ']' 00:04:49.666 19:13:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.666 19:13:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.666 19:13:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.666 19:13:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.666 19:13:47 -- common/autotest_common.sh@10 -- # set +x 00:04:49.924 [2024-11-17 19:13:47.977358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
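For readers following the opal_revert_cleanup trace above: the target controller is chosen by matching the PCI device ID (0x0a54) of every BDF that gen_nvme.sh reports. A minimal sketch of that selection step, using only the commands visible in the trace (the helper name pick_bdfs_by_id is made up for illustration; the real logic lives in autotest_common.sh):

    # Sketch only: list NVMe BDFs whose PCI device ID matches a given value (e.g. 0x0a54).
    # Assumes the SPDK checkout path used by this job; adjust rootdir for other setups.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pick_bdfs_by_id() {
        local want_id=$1 bdf
        # gen_nvme.sh emits a bdev_nvme_attach_controller config; traddr holds each PCI address.
        for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
            # sysfs exposes the PCI device ID for every function; keep only matching controllers.
            if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want_id" ]]; then
                echo "$bdf"
            fi
        done
    }
    pick_bdfs_by_id 0x0a54    # on this machine prints 0000:88:00.0, as in the trace
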
00:04:49.924 [2024-11-17 19:13:47.977437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070985 ] 00:04:49.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.924 [2024-11-17 19:13:48.036161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.924 [2024-11-17 19:13:48.124523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.924 [2024-11-17 19:13:48.124723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.859 19:13:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.859 19:13:48 -- common/autotest_common.sh@862 -- # return 0 00:04:50.859 19:13:48 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:04:50.859 19:13:48 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:04:50.859 19:13:48 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:54.147 nvme0n1 00:04:54.147 19:13:52 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:54.147 [2024-11-17 19:13:52.247567] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:54.147 [2024-11-17 19:13:52.247625] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:54.147 request: 00:04:54.147 { 00:04:54.147 "nvme_ctrlr_name": "nvme0", 00:04:54.147 "password": "test", 00:04:54.147 "method": "bdev_nvme_opal_revert", 00:04:54.147 "req_id": 1 00:04:54.147 } 00:04:54.147 Got JSON-RPC error response 00:04:54.147 response: 00:04:54.147 { 00:04:54.147 "code": -32603, 00:04:54.147 "message": "Internal error" 00:04:54.147 } 00:04:54.147 19:13:52 -- common/autotest_common.sh@1599 -- # true 00:04:54.147 19:13:52 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:04:54.147 19:13:52 -- common/autotest_common.sh@1603 -- # killprocess 1070985 00:04:54.147 19:13:52 -- common/autotest_common.sh@936 -- # '[' -z 1070985 ']' 00:04:54.147 19:13:52 -- common/autotest_common.sh@940 -- # kill -0 1070985 00:04:54.147 19:13:52 -- common/autotest_common.sh@941 -- # uname 00:04:54.147 19:13:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:54.147 19:13:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1070985 00:04:54.147 19:13:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:54.147 19:13:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:54.147 19:13:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1070985' 00:04:54.147 killing process with pid 1070985 00:04:54.147 19:13:52 -- common/autotest_common.sh@955 -- # kill 1070985 00:04:54.147 19:13:52 -- common/autotest_common.sh@960 -- # wait 1070985 00:04:54.147 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:54.147 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:54.147 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:54.147 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:54.147 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:54.147 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 
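The block above shows spdk_tgt coming up, nvme0 being attached over PCIe, and bdev_nvme_opal_revert failing with JSON-RPC -32603 because the drive rejects the admin SP session; the harness treats that as non-fatal and simply kills the target afterwards. A sketch of replaying the same RPC sequence by hand against a running spdk_tgt (same calls as the trace; the error tolerance mirrors the harness, it is not part of the RPCs themselves):

    # Sketch: replay the opal revert attempt against a running spdk_tgt.
    rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpcpy bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0   # exposes bdev nvme0n1
    # Revert the TPer with the test password; on this drive the call returns
    # "Internal error" (-32603), so continue rather than aborting the run.
    $rpcpy bdev_nvme_opal_revert -b nvme0 -p test || true
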
00:04:56.308 19:13:54 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:56.308 19:13:54 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:56.308 19:13:54 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:56.308 19:13:54 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:56.308 19:13:54 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:56.308 19:13:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.308 19:13:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.308 19:13:54 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:56.308 19:13:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.308 19:13:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.308 ************************************ 00:04:56.308 START TEST env 00:04:56.308 ************************************ 00:04:56.308 19:13:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:56.308 * Looking for test storage... 00:04:56.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:56.308 19:13:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.308 19:13:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.308 19:13:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.308 19:13:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.308 19:13:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.308 19:13:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.308 19:13:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.308 19:13:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.308 19:13:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.308 19:13:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.308 19:13:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.308 19:13:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.308 19:13:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.308 19:13:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.308 19:13:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.308 19:13:54 -- scripts/common.sh@344 -- # : 1 00:04:56.308 19:13:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.308 19:13:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:04:56.308 19:13:54 -- scripts/common.sh@364 -- # decimal 1 00:04:56.308 19:13:54 -- scripts/common.sh@352 -- # local d=1 00:04:56.308 19:13:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.308 19:13:54 -- scripts/common.sh@354 -- # echo 1 00:04:56.308 19:13:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.308 19:13:54 -- scripts/common.sh@365 -- # decimal 2 00:04:56.308 19:13:54 -- scripts/common.sh@352 -- # local d=2 00:04:56.308 19:13:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.308 19:13:54 -- scripts/common.sh@354 -- # echo 2 00:04:56.308 19:13:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.308 19:13:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.308 19:13:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.308 19:13:54 -- scripts/common.sh@367 -- # return 0 00:04:56.308 19:13:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.308 --rc genhtml_branch_coverage=1 00:04:56.308 --rc genhtml_function_coverage=1 00:04:56.308 --rc genhtml_legend=1 00:04:56.308 --rc geninfo_all_blocks=1 00:04:56.308 --rc geninfo_unexecuted_blocks=1 00:04:56.308 00:04:56.308 ' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.308 --rc genhtml_branch_coverage=1 00:04:56.308 --rc genhtml_function_coverage=1 00:04:56.308 --rc genhtml_legend=1 00:04:56.308 --rc geninfo_all_blocks=1 00:04:56.308 --rc geninfo_unexecuted_blocks=1 00:04:56.308 00:04:56.308 ' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.308 --rc genhtml_branch_coverage=1 00:04:56.308 --rc genhtml_function_coverage=1 00:04:56.308 --rc genhtml_legend=1 00:04:56.308 --rc geninfo_all_blocks=1 00:04:56.308 --rc geninfo_unexecuted_blocks=1 00:04:56.308 00:04:56.308 ' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.308 --rc genhtml_branch_coverage=1 00:04:56.308 --rc genhtml_function_coverage=1 00:04:56.308 --rc genhtml_legend=1 00:04:56.308 --rc geninfo_all_blocks=1 00:04:56.308 --rc geninfo_unexecuted_blocks=1 00:04:56.308 00:04:56.308 ' 00:04:56.308 19:13:54 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:56.308 19:13:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.308 19:13:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.308 ************************************ 00:04:56.308 START TEST env_memory 00:04:56.308 ************************************ 00:04:56.308 19:13:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:56.308 00:04:56.308 00:04:56.308 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.308 http://cunit.sourceforge.net/ 00:04:56.308 00:04:56.308 00:04:56.308 Suite: memory 00:04:56.308 Test: alloc and free memory map ...[2024-11-17 19:13:54.237539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial 
mem_map notify failed 00:04:56.308 passed 00:04:56.308 Test: mem map translation ...[2024-11-17 19:13:54.258538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:56.308 [2024-11-17 19:13:54.258564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:56.308 [2024-11-17 19:13:54.258619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:56.308 [2024-11-17 19:13:54.258642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:56.308 passed 00:04:56.308 Test: mem map registration ...[2024-11-17 19:13:54.300918] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:56.308 [2024-11-17 19:13:54.300946] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:56.308 passed 00:04:56.308 Test: mem map adjacent registrations ...passed 00:04:56.308 00:04:56.308 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.308 suites 1 1 n/a 0 0 00:04:56.308 tests 4 4 4 0 0 00:04:56.308 asserts 152 152 152 0 n/a 00:04:56.308 00:04:56.308 Elapsed time = 0.143 seconds 00:04:56.308 00:04:56.308 real 0m0.151s 00:04:56.308 user 0m0.145s 00:04:56.308 sys 0m0.005s 00:04:56.308 19:13:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.308 19:13:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.308 ************************************ 00:04:56.308 END TEST env_memory 00:04:56.308 ************************************ 00:04:56.308 19:13:54 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.308 19:13:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.308 19:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.308 19:13:54 -- common/autotest_common.sh@10 -- # set +x 00:04:56.308 ************************************ 00:04:56.308 START TEST env_vtophys 00:04:56.308 ************************************ 00:04:56.308 19:13:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.308 EAL: lib.eal log level changed from notice to debug 00:04:56.308 EAL: Detected lcore 0 as core 0 on socket 0 00:04:56.308 EAL: Detected lcore 1 as core 1 on socket 0 00:04:56.308 EAL: Detected lcore 2 as core 2 on socket 0 00:04:56.308 EAL: Detected lcore 3 as core 3 on socket 0 00:04:56.308 EAL: Detected lcore 4 as core 4 on socket 0 00:04:56.308 EAL: Detected lcore 5 as core 5 on socket 0 00:04:56.308 EAL: Detected lcore 6 as core 8 on socket 0 00:04:56.308 EAL: Detected lcore 7 as core 9 on socket 0 00:04:56.308 EAL: Detected lcore 8 as core 10 on socket 0 00:04:56.309 EAL: Detected lcore 9 as core 11 on socket 0 00:04:56.309 EAL: Detected lcore 10 as core 12 on socket 0 00:04:56.309 EAL: Detected lcore 11 as core 13 on socket 0 00:04:56.309 EAL: Detected lcore 12 as core 0 on socket 1 00:04:56.309 EAL: Detected lcore 
13 as core 1 on socket 1 00:04:56.309 EAL: Detected lcore 14 as core 2 on socket 1 00:04:56.309 EAL: Detected lcore 15 as core 3 on socket 1 00:04:56.309 EAL: Detected lcore 16 as core 4 on socket 1 00:04:56.309 EAL: Detected lcore 17 as core 5 on socket 1 00:04:56.309 EAL: Detected lcore 18 as core 8 on socket 1 00:04:56.309 EAL: Detected lcore 19 as core 9 on socket 1 00:04:56.309 EAL: Detected lcore 20 as core 10 on socket 1 00:04:56.309 EAL: Detected lcore 21 as core 11 on socket 1 00:04:56.309 EAL: Detected lcore 22 as core 12 on socket 1 00:04:56.309 EAL: Detected lcore 23 as core 13 on socket 1 00:04:56.309 EAL: Detected lcore 24 as core 0 on socket 0 00:04:56.309 EAL: Detected lcore 25 as core 1 on socket 0 00:04:56.309 EAL: Detected lcore 26 as core 2 on socket 0 00:04:56.309 EAL: Detected lcore 27 as core 3 on socket 0 00:04:56.309 EAL: Detected lcore 28 as core 4 on socket 0 00:04:56.309 EAL: Detected lcore 29 as core 5 on socket 0 00:04:56.309 EAL: Detected lcore 30 as core 8 on socket 0 00:04:56.309 EAL: Detected lcore 31 as core 9 on socket 0 00:04:56.309 EAL: Detected lcore 32 as core 10 on socket 0 00:04:56.309 EAL: Detected lcore 33 as core 11 on socket 0 00:04:56.309 EAL: Detected lcore 34 as core 12 on socket 0 00:04:56.309 EAL: Detected lcore 35 as core 13 on socket 0 00:04:56.309 EAL: Detected lcore 36 as core 0 on socket 1 00:04:56.309 EAL: Detected lcore 37 as core 1 on socket 1 00:04:56.309 EAL: Detected lcore 38 as core 2 on socket 1 00:04:56.309 EAL: Detected lcore 39 as core 3 on socket 1 00:04:56.309 EAL: Detected lcore 40 as core 4 on socket 1 00:04:56.309 EAL: Detected lcore 41 as core 5 on socket 1 00:04:56.309 EAL: Detected lcore 42 as core 8 on socket 1 00:04:56.309 EAL: Detected lcore 43 as core 9 on socket 1 00:04:56.309 EAL: Detected lcore 44 as core 10 on socket 1 00:04:56.309 EAL: Detected lcore 45 as core 11 on socket 1 00:04:56.309 EAL: Detected lcore 46 as core 12 on socket 1 00:04:56.309 EAL: Detected lcore 47 as core 13 on socket 1 00:04:56.309 EAL: Maximum logical cores by configuration: 128 00:04:56.309 EAL: Detected CPU lcores: 48 00:04:56.309 EAL: Detected NUMA nodes: 2 00:04:56.309 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:56.309 EAL: Detected shared linkage of DPDK 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:56.309 EAL: Registered [vdev] bus. 
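The lcore inventory above (48 logical cores over 2 sockets) comes from EAL's CPU probe at startup. A rough, DPDK-independent way to cross-check the same core/socket mapping from sysfs (a sketch only; topology file layout can vary between kernels):

    # Sketch: print "lcore N as core X on socket Y" from sysfs for comparison with the EAL lines.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        n=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $n as core $core on socket $socket"
    done | sort -t' ' -k2 -n
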
00:04:56.309 EAL: bus.vdev log level changed from disabled to notice 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:56.309 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:56.309 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:56.309 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:56.309 EAL: No shared files mode enabled, IPC will be disabled 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Bus pci wants IOVA as 'DC' 00:04:56.309 EAL: Bus vdev wants IOVA as 'DC' 00:04:56.309 EAL: Buses did not request a specific IOVA mode. 00:04:56.309 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:56.309 EAL: Selected IOVA mode 'VA' 00:04:56.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.309 EAL: Probing VFIO support... 00:04:56.309 EAL: IOMMU type 1 (Type 1) is supported 00:04:56.309 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:56.309 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:56.309 EAL: VFIO support initialized 00:04:56.309 EAL: Ask a virtual area of 0x2e000 bytes 00:04:56.309 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:56.309 EAL: Setting up physically contiguous memory... 
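The "Probing VFIO support... IOMMU type 1 (Type 1) is supported... VFIO support initialized" lines reflect runtime checks that an IOMMU is active and the test devices are bound to vfio-pci, which is what lets EAL select IOVA-as-VA. A quick way to verify the same preconditions on a host, sketched with standard tools rather than EAL's own probe:

    # Sketch: sanity-check the preconditions behind "VFIO support initialized" / IOVA mode 'VA'.
    # Non-empty iommu_groups means the kernel exposes an IOMMU to userspace drivers.
    ls /sys/kernel/iommu_groups | head
    # setup.sh rebound the NVMe under test (and the ioatdma channels) to vfio-pci:
    lspci -k -s 0000:88:00.0 | grep 'Kernel driver in use'   # expect vfio-pci
    # The vfio container device must exist for type-1 IOMMU mappings:
    ls -l /dev/vfio/vfio
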
00:04:56.309 EAL: Setting maximum number of open files to 524288 00:04:56.309 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:56.309 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:56.309 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:56.309 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:56.309 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.309 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:56.309 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.309 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.309 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:56.309 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:56.309 EAL: Hugepages will be freed exactly as allocated. 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: TSC frequency is ~2700000 KHz 00:04:56.309 EAL: Main lcore 0 is ready (tid=7f36b63c0a00;cpuset=[0]) 00:04:56.309 EAL: Trying to obtain current memory policy. 00:04:56.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.309 EAL: Restoring previous memory policy: 0 00:04:56.309 EAL: request: mp_malloc_sync 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Heap on socket 0 was expanded by 2MB 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:56.309 EAL: Mem event callback 'spdk:(nil)' registered 00:04:56.309 00:04:56.309 00:04:56.309 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.309 http://cunit.sourceforge.net/ 00:04:56.309 00:04:56.309 00:04:56.309 Suite: components_suite 00:04:56.309 Test: vtophys_malloc_test ...passed 00:04:56.309 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:56.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.309 EAL: Restoring previous memory policy: 4 00:04:56.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.309 EAL: request: mp_malloc_sync 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Heap on socket 0 was expanded by 4MB 00:04:56.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.309 EAL: request: mp_malloc_sync 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Heap on socket 0 was shrunk by 4MB 00:04:56.309 EAL: Trying to obtain current memory policy. 00:04:56.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.309 EAL: Restoring previous memory policy: 4 00:04:56.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.309 EAL: request: mp_malloc_sync 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Heap on socket 0 was expanded by 6MB 00:04:56.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.309 EAL: request: mp_malloc_sync 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Heap on socket 0 was shrunk by 6MB 00:04:56.309 EAL: Trying to obtain current memory policy. 00:04:56.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.309 EAL: Restoring previous memory policy: 4 00:04:56.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.309 EAL: request: mp_malloc_sync 00:04:56.309 EAL: No shared files mode enabled, IPC is disabled 00:04:56.309 EAL: Heap on socket 0 was expanded by 10MB 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was shrunk by 10MB 00:04:56.310 EAL: Trying to obtain current memory policy. 
00:04:56.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.310 EAL: Restoring previous memory policy: 4 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was expanded by 18MB 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was shrunk by 18MB 00:04:56.310 EAL: Trying to obtain current memory policy. 00:04:56.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.310 EAL: Restoring previous memory policy: 4 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was expanded by 34MB 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was shrunk by 34MB 00:04:56.310 EAL: Trying to obtain current memory policy. 00:04:56.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.310 EAL: Restoring previous memory policy: 4 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was expanded by 66MB 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was shrunk by 66MB 00:04:56.310 EAL: Trying to obtain current memory policy. 00:04:56.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.310 EAL: Restoring previous memory policy: 4 00:04:56.310 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.310 EAL: request: mp_malloc_sync 00:04:56.310 EAL: No shared files mode enabled, IPC is disabled 00:04:56.310 EAL: Heap on socket 0 was expanded by 130MB 00:04:56.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.568 EAL: request: mp_malloc_sync 00:04:56.568 EAL: No shared files mode enabled, IPC is disabled 00:04:56.568 EAL: Heap on socket 0 was shrunk by 130MB 00:04:56.568 EAL: Trying to obtain current memory policy. 00:04:56.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.568 EAL: Restoring previous memory policy: 4 00:04:56.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.568 EAL: request: mp_malloc_sync 00:04:56.568 EAL: No shared files mode enabled, IPC is disabled 00:04:56.568 EAL: Heap on socket 0 was expanded by 258MB 00:04:56.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.568 EAL: request: mp_malloc_sync 00:04:56.568 EAL: No shared files mode enabled, IPC is disabled 00:04:56.568 EAL: Heap on socket 0 was shrunk by 258MB 00:04:56.568 EAL: Trying to obtain current memory policy. 
00:04:56.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.826 EAL: Restoring previous memory policy: 4 00:04:56.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.826 EAL: request: mp_malloc_sync 00:04:56.826 EAL: No shared files mode enabled, IPC is disabled 00:04:56.826 EAL: Heap on socket 0 was expanded by 514MB 00:04:56.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.084 EAL: request: mp_malloc_sync 00:04:57.084 EAL: No shared files mode enabled, IPC is disabled 00:04:57.084 EAL: Heap on socket 0 was shrunk by 514MB 00:04:57.084 EAL: Trying to obtain current memory policy. 00:04:57.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.342 EAL: Restoring previous memory policy: 4 00:04:57.342 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.342 EAL: request: mp_malloc_sync 00:04:57.342 EAL: No shared files mode enabled, IPC is disabled 00:04:57.342 EAL: Heap on socket 0 was expanded by 1026MB 00:04:57.342 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.601 EAL: request: mp_malloc_sync 00:04:57.601 EAL: No shared files mode enabled, IPC is disabled 00:04:57.601 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:57.601 passed 00:04:57.601 00:04:57.601 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.602 suites 1 1 n/a 0 0 00:04:57.602 tests 2 2 2 0 0 00:04:57.602 asserts 497 497 497 0 n/a 00:04:57.602 00:04:57.602 Elapsed time = 1.312 seconds 00:04:57.602 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.602 EAL: request: mp_malloc_sync 00:04:57.602 EAL: No shared files mode enabled, IPC is disabled 00:04:57.602 EAL: Heap on socket 0 was shrunk by 2MB 00:04:57.602 EAL: No shared files mode enabled, IPC is disabled 00:04:57.602 EAL: No shared files mode enabled, IPC is disabled 00:04:57.602 EAL: No shared files mode enabled, IPC is disabled 00:04:57.602 00:04:57.602 real 0m1.431s 00:04:57.602 user 0m0.830s 00:04:57.602 sys 0m0.565s 00:04:57.602 19:13:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.602 19:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.602 ************************************ 00:04:57.602 END TEST env_vtophys 00:04:57.602 ************************************ 00:04:57.602 19:13:55 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.602 19:13:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.602 19:13:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.602 19:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.602 ************************************ 00:04:57.602 START TEST env_pci 00:04:57.602 ************************************ 00:04:57.602 19:13:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.602 00:04:57.602 00:04:57.602 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.602 http://cunit.sourceforge.net/ 00:04:57.602 00:04:57.602 00:04:57.602 Suite: pci 00:04:57.602 Test: pci_hook ...[2024-11-17 19:13:55.852651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1072028 has claimed it 00:04:57.862 EAL: Cannot find device (10000:00:01.0) 00:04:57.862 EAL: Failed to attach device on primary process 00:04:57.862 passed 00:04:57.862 00:04:57.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.862 suites 1 1 n/a 0 0 00:04:57.862 tests 1 1 1 0 0 
00:04:57.862 asserts 25 25 25 0 n/a 00:04:57.862 00:04:57.862 Elapsed time = 0.022 seconds 00:04:57.862 00:04:57.862 real 0m0.034s 00:04:57.862 user 0m0.011s 00:04:57.862 sys 0m0.023s 00:04:57.862 19:13:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.862 19:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.862 ************************************ 00:04:57.862 END TEST env_pci 00:04:57.862 ************************************ 00:04:57.862 19:13:55 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:57.862 19:13:55 -- env/env.sh@15 -- # uname 00:04:57.862 19:13:55 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:57.862 19:13:55 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:57.862 19:13:55 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.862 19:13:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:57.862 19:13:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.862 19:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.862 ************************************ 00:04:57.862 START TEST env_dpdk_post_init 00:04:57.862 ************************************ 00:04:57.862 19:13:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.862 EAL: Detected CPU lcores: 48 00:04:57.862 EAL: Detected NUMA nodes: 2 00:04:57.862 EAL: Detected shared linkage of DPDK 00:04:57.862 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.862 EAL: Selected IOVA mode 'VA' 00:04:57.862 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.862 EAL: VFIO support initialized 00:04:57.862 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.862 EAL: Using IOMMU type 1 (Type 1) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:57.862 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:58.121 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:58.691 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:01.972 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:01.972 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:02.230 Starting DPDK initialization... 00:05:02.230 Starting SPDK post initialization... 00:05:02.230 SPDK NVMe probe 00:05:02.230 Attaching to 0000:88:00.0 00:05:02.230 Attached to 0000:88:00.0 00:05:02.230 Cleaning up... 00:05:02.230 00:05:02.230 real 0m4.398s 00:05:02.230 user 0m3.278s 00:05:02.230 sys 0m0.174s 00:05:02.230 19:14:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.230 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.230 ************************************ 00:05:02.230 END TEST env_dpdk_post_init 00:05:02.230 ************************************ 00:05:02.231 19:14:00 -- env/env.sh@26 -- # uname 00:05:02.231 19:14:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:02.231 19:14:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.231 19:14:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.231 19:14:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.231 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.231 ************************************ 00:05:02.231 START TEST env_mem_callbacks 00:05:02.231 ************************************ 00:05:02.231 19:14:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.231 EAL: Detected CPU lcores: 48 00:05:02.231 EAL: Detected NUMA nodes: 2 00:05:02.231 EAL: Detected shared linkage of DPDK 00:05:02.231 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:02.231 EAL: Selected IOVA mode 'VA' 00:05:02.231 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.231 EAL: VFIO support initialized 00:05:02.231 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:02.231 00:05:02.231 00:05:02.231 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.231 http://cunit.sourceforge.net/ 00:05:02.231 00:05:02.231 00:05:02.231 Suite: memory 00:05:02.231 Test: test ... 
00:05:02.231 register 0x200000200000 2097152 00:05:02.231 malloc 3145728 00:05:02.231 register 0x200000400000 4194304 00:05:02.231 buf 0x200000500000 len 3145728 PASSED 00:05:02.231 malloc 64 00:05:02.231 buf 0x2000004fff40 len 64 PASSED 00:05:02.231 malloc 4194304 00:05:02.231 register 0x200000800000 6291456 00:05:02.231 buf 0x200000a00000 len 4194304 PASSED 00:05:02.231 free 0x200000500000 3145728 00:05:02.231 free 0x2000004fff40 64 00:05:02.231 unregister 0x200000400000 4194304 PASSED 00:05:02.231 free 0x200000a00000 4194304 00:05:02.231 unregister 0x200000800000 6291456 PASSED 00:05:02.231 malloc 8388608 00:05:02.231 register 0x200000400000 10485760 00:05:02.231 buf 0x200000600000 len 8388608 PASSED 00:05:02.231 free 0x200000600000 8388608 00:05:02.231 unregister 0x200000400000 10485760 PASSED 00:05:02.231 passed 00:05:02.231 00:05:02.231 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.231 suites 1 1 n/a 0 0 00:05:02.231 tests 1 1 1 0 0 00:05:02.231 asserts 15 15 15 0 n/a 00:05:02.231 00:05:02.231 Elapsed time = 0.005 seconds 00:05:02.231 00:05:02.231 real 0m0.047s 00:05:02.231 user 0m0.010s 00:05:02.231 sys 0m0.037s 00:05:02.231 19:14:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.231 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.231 ************************************ 00:05:02.231 END TEST env_mem_callbacks 00:05:02.231 ************************************ 00:05:02.231 00:05:02.231 real 0m6.348s 00:05:02.231 user 0m4.425s 00:05:02.231 sys 0m0.971s 00:05:02.231 19:14:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.231 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.231 ************************************ 00:05:02.231 END TEST env 00:05:02.231 ************************************ 00:05:02.231 19:14:00 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.231 19:14:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.231 19:14:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.231 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.231 ************************************ 00:05:02.231 START TEST rpc 00:05:02.231 ************************************ 00:05:02.231 19:14:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.231 * Looking for test storage... 
00:05:02.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.231 19:14:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.231 19:14:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.231 19:14:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.490 19:14:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.490 19:14:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.490 19:14:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.490 19:14:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.490 19:14:00 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.490 19:14:00 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.490 19:14:00 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.490 19:14:00 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.490 19:14:00 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.490 19:14:00 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.490 19:14:00 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.490 19:14:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.490 19:14:00 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.490 19:14:00 -- scripts/common.sh@344 -- # : 1 00:05:02.490 19:14:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.490 19:14:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.490 19:14:00 -- scripts/common.sh@364 -- # decimal 1 00:05:02.490 19:14:00 -- scripts/common.sh@352 -- # local d=1 00:05:02.490 19:14:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.490 19:14:00 -- scripts/common.sh@354 -- # echo 1 00:05:02.490 19:14:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.490 19:14:00 -- scripts/common.sh@365 -- # decimal 2 00:05:02.490 19:14:00 -- scripts/common.sh@352 -- # local d=2 00:05:02.490 19:14:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.490 19:14:00 -- scripts/common.sh@354 -- # echo 2 00:05:02.490 19:14:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.490 19:14:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.490 19:14:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.490 19:14:00 -- scripts/common.sh@367 -- # return 0 00:05:02.490 19:14:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.490 19:14:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.490 --rc genhtml_branch_coverage=1 00:05:02.490 --rc genhtml_function_coverage=1 00:05:02.490 --rc genhtml_legend=1 00:05:02.490 --rc geninfo_all_blocks=1 00:05:02.490 --rc geninfo_unexecuted_blocks=1 00:05:02.490 00:05:02.490 ' 00:05:02.490 19:14:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.490 --rc genhtml_branch_coverage=1 00:05:02.490 --rc genhtml_function_coverage=1 00:05:02.490 --rc genhtml_legend=1 00:05:02.490 --rc geninfo_all_blocks=1 00:05:02.490 --rc geninfo_unexecuted_blocks=1 00:05:02.490 00:05:02.490 ' 00:05:02.490 19:14:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.490 --rc genhtml_branch_coverage=1 00:05:02.490 --rc genhtml_function_coverage=1 00:05:02.490 --rc genhtml_legend=1 00:05:02.490 --rc geninfo_all_blocks=1 00:05:02.490 --rc geninfo_unexecuted_blocks=1 00:05:02.490 00:05:02.490 ' 
00:05:02.490 19:14:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.490 --rc genhtml_branch_coverage=1 00:05:02.490 --rc genhtml_function_coverage=1 00:05:02.490 --rc genhtml_legend=1 00:05:02.490 --rc geninfo_all_blocks=1 00:05:02.490 --rc geninfo_unexecuted_blocks=1 00:05:02.490 00:05:02.490 ' 00:05:02.490 19:14:00 -- rpc/rpc.sh@65 -- # spdk_pid=1072703 00:05:02.490 19:14:00 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:02.490 19:14:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.490 19:14:00 -- rpc/rpc.sh@67 -- # waitforlisten 1072703 00:05:02.490 19:14:00 -- common/autotest_common.sh@829 -- # '[' -z 1072703 ']' 00:05:02.490 19:14:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.490 19:14:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.490 19:14:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.490 19:14:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.490 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:05:02.490 [2024-11-17 19:14:00.626479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:02.490 [2024-11-17 19:14:00.626555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072703 ] 00:05:02.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.490 [2024-11-17 19:14:00.683132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.749 [2024-11-17 19:14:00.771109] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.749 [2024-11-17 19:14:00.771236] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.749 [2024-11-17 19:14:00.771253] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1072703' to capture a snapshot of events at runtime. 00:05:02.749 [2024-11-17 19:14:00.771264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1072703 for offline analysis/debug. 
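The two app_setup_trace notices above already spell out how to pull the tracepoint data back out of this spdk_tgt instance. A minimal sketch, reusing the PID and shm name from this particular run (1072703) and the 'bdev' tracepoint group that rpc.sh enabled with '-e bdev'; substitute your own PID and paths:

  # live snapshot of the events recorded for the bdev tracepoint group
  spdk_trace -s spdk_tgt -p 1072703
  # or keep the per-PID shm file around for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid1072703 /tmp/spdk_tgt_trace.pid1072703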
00:05:02.749 [2024-11-17 19:14:00.771293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.683 19:14:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.683 19:14:01 -- common/autotest_common.sh@862 -- # return 0 00:05:03.684 19:14:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.684 19:14:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.684 19:14:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.684 19:14:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.684 19:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.684 19:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 ************************************ 00:05:03.684 START TEST rpc_integrity 00:05:03.684 ************************************ 00:05:03.684 19:14:01 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:03.684 19:14:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.684 19:14:01 -- rpc/rpc.sh@13 -- # jq length 00:05:03.684 19:14:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.684 19:14:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.684 19:14:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.684 { 00:05:03.684 "name": "Malloc0", 00:05:03.684 "aliases": [ 00:05:03.684 "8c3a15ac-6f70-4c32-9313-cf69ae3e5c82" 00:05:03.684 ], 00:05:03.684 "product_name": "Malloc disk", 00:05:03.684 "block_size": 512, 00:05:03.684 "num_blocks": 16384, 00:05:03.684 "uuid": "8c3a15ac-6f70-4c32-9313-cf69ae3e5c82", 00:05:03.684 "assigned_rate_limits": { 00:05:03.684 "rw_ios_per_sec": 0, 00:05:03.684 "rw_mbytes_per_sec": 0, 00:05:03.684 "r_mbytes_per_sec": 0, 00:05:03.684 "w_mbytes_per_sec": 0 00:05:03.684 }, 00:05:03.684 "claimed": false, 00:05:03.684 "zoned": false, 00:05:03.684 "supported_io_types": { 00:05:03.684 "read": true, 00:05:03.684 "write": true, 00:05:03.684 "unmap": true, 00:05:03.684 "write_zeroes": true, 00:05:03.684 "flush": true, 00:05:03.684 "reset": true, 00:05:03.684 "compare": false, 00:05:03.684 "compare_and_write": false, 00:05:03.684 
"abort": true, 00:05:03.684 "nvme_admin": false, 00:05:03.684 "nvme_io": false 00:05:03.684 }, 00:05:03.684 "memory_domains": [ 00:05:03.684 { 00:05:03.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.684 "dma_device_type": 2 00:05:03.684 } 00:05:03.684 ], 00:05:03.684 "driver_specific": {} 00:05:03.684 } 00:05:03.684 ]' 00:05:03.684 19:14:01 -- rpc/rpc.sh@17 -- # jq length 00:05:03.684 19:14:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.684 19:14:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 [2024-11-17 19:14:01.688555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.684 [2024-11-17 19:14:01.688593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.684 [2024-11-17 19:14:01.688613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1381b20 00:05:03.684 [2024-11-17 19:14:01.688625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.684 [2024-11-17 19:14:01.689873] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.684 [2024-11-17 19:14:01.689900] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.684 Passthru0 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.684 { 00:05:03.684 "name": "Malloc0", 00:05:03.684 "aliases": [ 00:05:03.684 "8c3a15ac-6f70-4c32-9313-cf69ae3e5c82" 00:05:03.684 ], 00:05:03.684 "product_name": "Malloc disk", 00:05:03.684 "block_size": 512, 00:05:03.684 "num_blocks": 16384, 00:05:03.684 "uuid": "8c3a15ac-6f70-4c32-9313-cf69ae3e5c82", 00:05:03.684 "assigned_rate_limits": { 00:05:03.684 "rw_ios_per_sec": 0, 00:05:03.684 "rw_mbytes_per_sec": 0, 00:05:03.684 "r_mbytes_per_sec": 0, 00:05:03.684 "w_mbytes_per_sec": 0 00:05:03.684 }, 00:05:03.684 "claimed": true, 00:05:03.684 "claim_type": "exclusive_write", 00:05:03.684 "zoned": false, 00:05:03.684 "supported_io_types": { 00:05:03.684 "read": true, 00:05:03.684 "write": true, 00:05:03.684 "unmap": true, 00:05:03.684 "write_zeroes": true, 00:05:03.684 "flush": true, 00:05:03.684 "reset": true, 00:05:03.684 "compare": false, 00:05:03.684 "compare_and_write": false, 00:05:03.684 "abort": true, 00:05:03.684 "nvme_admin": false, 00:05:03.684 "nvme_io": false 00:05:03.684 }, 00:05:03.684 "memory_domains": [ 00:05:03.684 { 00:05:03.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.684 "dma_device_type": 2 00:05:03.684 } 00:05:03.684 ], 00:05:03.684 "driver_specific": {} 00:05:03.684 }, 00:05:03.684 { 00:05:03.684 "name": "Passthru0", 00:05:03.684 "aliases": [ 00:05:03.684 "9c885267-44ae-5d3b-9e42-eb6baade6e4a" 00:05:03.684 ], 00:05:03.684 "product_name": "passthru", 00:05:03.684 "block_size": 512, 00:05:03.684 "num_blocks": 16384, 00:05:03.684 "uuid": "9c885267-44ae-5d3b-9e42-eb6baade6e4a", 00:05:03.684 "assigned_rate_limits": { 00:05:03.684 "rw_ios_per_sec": 0, 00:05:03.684 "rw_mbytes_per_sec": 0, 00:05:03.684 "r_mbytes_per_sec": 0, 00:05:03.684 "w_mbytes_per_sec": 0 
00:05:03.684 }, 00:05:03.684 "claimed": false, 00:05:03.684 "zoned": false, 00:05:03.684 "supported_io_types": { 00:05:03.684 "read": true, 00:05:03.684 "write": true, 00:05:03.684 "unmap": true, 00:05:03.684 "write_zeroes": true, 00:05:03.684 "flush": true, 00:05:03.684 "reset": true, 00:05:03.684 "compare": false, 00:05:03.684 "compare_and_write": false, 00:05:03.684 "abort": true, 00:05:03.684 "nvme_admin": false, 00:05:03.684 "nvme_io": false 00:05:03.684 }, 00:05:03.684 "memory_domains": [ 00:05:03.684 { 00:05:03.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.684 "dma_device_type": 2 00:05:03.684 } 00:05:03.684 ], 00:05:03.684 "driver_specific": { 00:05:03.684 "passthru": { 00:05:03.684 "name": "Passthru0", 00:05:03.684 "base_bdev_name": "Malloc0" 00:05:03.684 } 00:05:03.684 } 00:05:03.684 } 00:05:03.684 ]' 00:05:03.684 19:14:01 -- rpc/rpc.sh@21 -- # jq length 00:05:03.684 19:14:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.684 19:14:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.684 19:14:01 -- rpc/rpc.sh@26 -- # jq length 00:05:03.684 19:14:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.684 00:05:03.684 real 0m0.211s 00:05:03.684 user 0m0.138s 00:05:03.684 sys 0m0.014s 00:05:03.684 19:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 ************************************ 00:05:03.684 END TEST rpc_integrity 00:05:03.684 ************************************ 00:05:03.684 19:14:01 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.684 19:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.684 19:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 ************************************ 00:05:03.684 START TEST rpc_plugins 00:05:03.684 ************************************ 00:05:03.684 19:14:01 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:03.684 19:14:01 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.684 19:14:01 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.684 19:14:01 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.684 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.684 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.685 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.685 19:14:01 -- rpc/rpc.sh@31 -- # bdevs='[ 
00:05:03.685 { 00:05:03.685 "name": "Malloc1", 00:05:03.685 "aliases": [ 00:05:03.685 "5654bd0b-032f-4689-9f91-8ca147950056" 00:05:03.685 ], 00:05:03.685 "product_name": "Malloc disk", 00:05:03.685 "block_size": 4096, 00:05:03.685 "num_blocks": 256, 00:05:03.685 "uuid": "5654bd0b-032f-4689-9f91-8ca147950056", 00:05:03.685 "assigned_rate_limits": { 00:05:03.685 "rw_ios_per_sec": 0, 00:05:03.685 "rw_mbytes_per_sec": 0, 00:05:03.685 "r_mbytes_per_sec": 0, 00:05:03.685 "w_mbytes_per_sec": 0 00:05:03.685 }, 00:05:03.685 "claimed": false, 00:05:03.685 "zoned": false, 00:05:03.685 "supported_io_types": { 00:05:03.685 "read": true, 00:05:03.685 "write": true, 00:05:03.685 "unmap": true, 00:05:03.685 "write_zeroes": true, 00:05:03.685 "flush": true, 00:05:03.685 "reset": true, 00:05:03.685 "compare": false, 00:05:03.685 "compare_and_write": false, 00:05:03.685 "abort": true, 00:05:03.685 "nvme_admin": false, 00:05:03.685 "nvme_io": false 00:05:03.685 }, 00:05:03.685 "memory_domains": [ 00:05:03.685 { 00:05:03.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.685 "dma_device_type": 2 00:05:03.685 } 00:05:03.685 ], 00:05:03.685 "driver_specific": {} 00:05:03.685 } 00:05:03.685 ]' 00:05:03.685 19:14:01 -- rpc/rpc.sh@32 -- # jq length 00:05:03.685 19:14:01 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.685 19:14:01 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.685 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.685 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.685 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.685 19:14:01 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.685 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.685 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.685 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.685 19:14:01 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.685 19:14:01 -- rpc/rpc.sh@36 -- # jq length 00:05:03.685 19:14:01 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.685 00:05:03.685 real 0m0.108s 00:05:03.685 user 0m0.065s 00:05:03.685 sys 0m0.008s 00:05:03.685 19:14:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.685 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.685 ************************************ 00:05:03.685 END TEST rpc_plugins 00:05:03.685 ************************************ 00:05:03.943 19:14:01 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.943 19:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.943 19:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.943 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.943 ************************************ 00:05:03.943 START TEST rpc_trace_cmd_test 00:05:03.943 ************************************ 00:05:03.943 19:14:01 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:03.943 19:14:01 -- rpc/rpc.sh@40 -- # local info 00:05:03.943 19:14:01 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.943 19:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.943 19:14:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.943 19:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.943 19:14:01 -- rpc/rpc.sh@42 -- # info='{ 00:05:03.943 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1072703", 00:05:03.943 "tpoint_group_mask": "0x8", 00:05:03.943 "iscsi_conn": { 00:05:03.943 "mask": "0x2", 00:05:03.943 "tpoint_mask": "0x0" 
00:05:03.943 }, 00:05:03.943 "scsi": { 00:05:03.943 "mask": "0x4", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "bdev": { 00:05:03.943 "mask": "0x8", 00:05:03.943 "tpoint_mask": "0xffffffffffffffff" 00:05:03.943 }, 00:05:03.943 "nvmf_rdma": { 00:05:03.943 "mask": "0x10", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "nvmf_tcp": { 00:05:03.943 "mask": "0x20", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "ftl": { 00:05:03.943 "mask": "0x40", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "blobfs": { 00:05:03.943 "mask": "0x80", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "dsa": { 00:05:03.943 "mask": "0x200", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "thread": { 00:05:03.943 "mask": "0x400", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "nvme_pcie": { 00:05:03.943 "mask": "0x800", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "iaa": { 00:05:03.943 "mask": "0x1000", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "nvme_tcp": { 00:05:03.943 "mask": "0x2000", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 }, 00:05:03.943 "bdev_nvme": { 00:05:03.943 "mask": "0x4000", 00:05:03.943 "tpoint_mask": "0x0" 00:05:03.943 } 00:05:03.943 }' 00:05:03.943 19:14:01 -- rpc/rpc.sh@43 -- # jq length 00:05:03.943 19:14:02 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:03.943 19:14:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.943 19:14:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.943 19:14:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.944 19:14:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.944 19:14:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.944 19:14:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.944 19:14:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.944 19:14:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.944 00:05:03.944 real 0m0.177s 00:05:03.944 user 0m0.159s 00:05:03.944 sys 0m0.011s 00:05:03.944 19:14:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.944 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.944 ************************************ 00:05:03.944 END TEST rpc_trace_cmd_test 00:05:03.944 ************************************ 00:05:03.944 19:14:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.944 19:14:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.944 19:14:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.944 19:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.944 19:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.944 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.944 ************************************ 00:05:03.944 START TEST rpc_daemon_integrity 00:05:03.944 ************************************ 00:05:03.944 19:14:02 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:03.944 19:14:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.944 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.944 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.944 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.944 19:14:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.944 19:14:02 -- rpc/rpc.sh@13 -- # jq length 00:05:04.202 19:14:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.202 19:14:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.202 19:14:02 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.202 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.202 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.202 19:14:02 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.202 19:14:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.202 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.202 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.202 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.202 19:14:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.202 { 00:05:04.202 "name": "Malloc2", 00:05:04.202 "aliases": [ 00:05:04.202 "45cdb208-d8ae-4b24-bcd4-abbbee4dcce6" 00:05:04.202 ], 00:05:04.202 "product_name": "Malloc disk", 00:05:04.202 "block_size": 512, 00:05:04.202 "num_blocks": 16384, 00:05:04.202 "uuid": "45cdb208-d8ae-4b24-bcd4-abbbee4dcce6", 00:05:04.202 "assigned_rate_limits": { 00:05:04.202 "rw_ios_per_sec": 0, 00:05:04.202 "rw_mbytes_per_sec": 0, 00:05:04.202 "r_mbytes_per_sec": 0, 00:05:04.202 "w_mbytes_per_sec": 0 00:05:04.202 }, 00:05:04.202 "claimed": false, 00:05:04.202 "zoned": false, 00:05:04.202 "supported_io_types": { 00:05:04.202 "read": true, 00:05:04.202 "write": true, 00:05:04.202 "unmap": true, 00:05:04.202 "write_zeroes": true, 00:05:04.202 "flush": true, 00:05:04.202 "reset": true, 00:05:04.202 "compare": false, 00:05:04.202 "compare_and_write": false, 00:05:04.202 "abort": true, 00:05:04.202 "nvme_admin": false, 00:05:04.202 "nvme_io": false 00:05:04.202 }, 00:05:04.202 "memory_domains": [ 00:05:04.202 { 00:05:04.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.202 "dma_device_type": 2 00:05:04.202 } 00:05:04.202 ], 00:05:04.202 "driver_specific": {} 00:05:04.202 } 00:05:04.202 ]' 00:05:04.202 19:14:02 -- rpc/rpc.sh@17 -- # jq length 00:05:04.202 19:14:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.202 19:14:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.202 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.202 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.202 [2024-11-17 19:14:02.270310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.202 [2024-11-17 19:14:02.270359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.202 [2024-11-17 19:14:02.270384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1380ec0 00:05:04.202 [2024-11-17 19:14:02.270397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.202 [2024-11-17 19:14:02.271628] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.202 [2024-11-17 19:14:02.271651] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.202 Passthru0 00:05:04.202 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.202 19:14:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.202 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.202 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.202 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.202 19:14:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.202 { 00:05:04.202 "name": "Malloc2", 00:05:04.202 "aliases": [ 00:05:04.202 "45cdb208-d8ae-4b24-bcd4-abbbee4dcce6" 00:05:04.202 ], 00:05:04.202 "product_name": "Malloc disk", 00:05:04.202 "block_size": 512, 00:05:04.202 "num_blocks": 16384, 00:05:04.202 "uuid": 
"45cdb208-d8ae-4b24-bcd4-abbbee4dcce6", 00:05:04.203 "assigned_rate_limits": { 00:05:04.203 "rw_ios_per_sec": 0, 00:05:04.203 "rw_mbytes_per_sec": 0, 00:05:04.203 "r_mbytes_per_sec": 0, 00:05:04.203 "w_mbytes_per_sec": 0 00:05:04.203 }, 00:05:04.203 "claimed": true, 00:05:04.203 "claim_type": "exclusive_write", 00:05:04.203 "zoned": false, 00:05:04.203 "supported_io_types": { 00:05:04.203 "read": true, 00:05:04.203 "write": true, 00:05:04.203 "unmap": true, 00:05:04.203 "write_zeroes": true, 00:05:04.203 "flush": true, 00:05:04.203 "reset": true, 00:05:04.203 "compare": false, 00:05:04.203 "compare_and_write": false, 00:05:04.203 "abort": true, 00:05:04.203 "nvme_admin": false, 00:05:04.203 "nvme_io": false 00:05:04.203 }, 00:05:04.203 "memory_domains": [ 00:05:04.203 { 00:05:04.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.203 "dma_device_type": 2 00:05:04.203 } 00:05:04.203 ], 00:05:04.203 "driver_specific": {} 00:05:04.203 }, 00:05:04.203 { 00:05:04.203 "name": "Passthru0", 00:05:04.203 "aliases": [ 00:05:04.203 "56e0fa5f-d5a5-5a76-a9fa-6f840fcbe5d1" 00:05:04.203 ], 00:05:04.203 "product_name": "passthru", 00:05:04.203 "block_size": 512, 00:05:04.203 "num_blocks": 16384, 00:05:04.203 "uuid": "56e0fa5f-d5a5-5a76-a9fa-6f840fcbe5d1", 00:05:04.203 "assigned_rate_limits": { 00:05:04.203 "rw_ios_per_sec": 0, 00:05:04.203 "rw_mbytes_per_sec": 0, 00:05:04.203 "r_mbytes_per_sec": 0, 00:05:04.203 "w_mbytes_per_sec": 0 00:05:04.203 }, 00:05:04.203 "claimed": false, 00:05:04.203 "zoned": false, 00:05:04.203 "supported_io_types": { 00:05:04.203 "read": true, 00:05:04.203 "write": true, 00:05:04.203 "unmap": true, 00:05:04.203 "write_zeroes": true, 00:05:04.203 "flush": true, 00:05:04.203 "reset": true, 00:05:04.203 "compare": false, 00:05:04.203 "compare_and_write": false, 00:05:04.203 "abort": true, 00:05:04.203 "nvme_admin": false, 00:05:04.203 "nvme_io": false 00:05:04.203 }, 00:05:04.203 "memory_domains": [ 00:05:04.203 { 00:05:04.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.203 "dma_device_type": 2 00:05:04.203 } 00:05:04.203 ], 00:05:04.203 "driver_specific": { 00:05:04.203 "passthru": { 00:05:04.203 "name": "Passthru0", 00:05:04.203 "base_bdev_name": "Malloc2" 00:05:04.203 } 00:05:04.203 } 00:05:04.203 } 00:05:04.203 ]' 00:05:04.203 19:14:02 -- rpc/rpc.sh@21 -- # jq length 00:05:04.203 19:14:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.203 19:14:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.203 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.203 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.203 19:14:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.203 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.203 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.203 19:14:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.203 19:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.203 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 19:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.203 19:14:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.203 19:14:02 -- rpc/rpc.sh@26 -- # jq length 00:05:04.203 19:14:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.203 00:05:04.203 real 0m0.216s 00:05:04.203 user 0m0.142s 00:05:04.203 sys 0m0.022s 00:05:04.203 19:14:02 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.203 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.203 ************************************ 00:05:04.203 END TEST rpc_daemon_integrity 00:05:04.203 ************************************ 00:05:04.203 19:14:02 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.203 19:14:02 -- rpc/rpc.sh@84 -- # killprocess 1072703 00:05:04.203 19:14:02 -- common/autotest_common.sh@936 -- # '[' -z 1072703 ']' 00:05:04.203 19:14:02 -- common/autotest_common.sh@940 -- # kill -0 1072703 00:05:04.203 19:14:02 -- common/autotest_common.sh@941 -- # uname 00:05:04.203 19:14:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.203 19:14:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1072703 00:05:04.203 19:14:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.203 19:14:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.203 19:14:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1072703' 00:05:04.203 killing process with pid 1072703 00:05:04.203 19:14:02 -- common/autotest_common.sh@955 -- # kill 1072703 00:05:04.203 19:14:02 -- common/autotest_common.sh@960 -- # wait 1072703 00:05:04.770 00:05:04.770 real 0m2.399s 00:05:04.770 user 0m3.019s 00:05:04.770 sys 0m0.587s 00:05:04.770 19:14:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.770 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.770 ************************************ 00:05:04.770 END TEST rpc 00:05:04.770 ************************************ 00:05:04.770 19:14:02 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.770 19:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.770 19:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.770 19:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:04.770 ************************************ 00:05:04.770 START TEST rpc_client 00:05:04.770 ************************************ 00:05:04.770 19:14:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:04.770 * Looking for test storage... 
00:05:04.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:04.770 19:14:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:04.770 19:14:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:04.770 19:14:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:04.770 19:14:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:04.770 19:14:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:04.770 19:14:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:04.770 19:14:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:04.770 19:14:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:04.770 19:14:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:04.770 19:14:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.770 19:14:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:04.771 19:14:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:04.771 19:14:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:04.771 19:14:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:04.771 19:14:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:04.771 19:14:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:04.771 19:14:02 -- scripts/common.sh@344 -- # : 1 00:05:04.771 19:14:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:04.771 19:14:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.771 19:14:02 -- scripts/common.sh@364 -- # decimal 1 00:05:04.771 19:14:03 -- scripts/common.sh@352 -- # local d=1 00:05:04.771 19:14:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.771 19:14:03 -- scripts/common.sh@354 -- # echo 1 00:05:04.771 19:14:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:04.771 19:14:03 -- scripts/common.sh@365 -- # decimal 2 00:05:04.771 19:14:03 -- scripts/common.sh@352 -- # local d=2 00:05:04.771 19:14:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.771 19:14:03 -- scripts/common.sh@354 -- # echo 2 00:05:04.771 19:14:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:04.771 19:14:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:04.771 19:14:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:04.771 19:14:03 -- scripts/common.sh@367 -- # return 0 00:05:04.771 19:14:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.771 19:14:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:04.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.771 --rc genhtml_branch_coverage=1 00:05:04.771 --rc genhtml_function_coverage=1 00:05:04.771 --rc genhtml_legend=1 00:05:04.771 --rc geninfo_all_blocks=1 00:05:04.771 --rc geninfo_unexecuted_blocks=1 00:05:04.771 00:05:04.771 ' 00:05:04.771 19:14:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:04.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.771 --rc genhtml_branch_coverage=1 00:05:04.771 --rc genhtml_function_coverage=1 00:05:04.771 --rc genhtml_legend=1 00:05:04.771 --rc geninfo_all_blocks=1 00:05:04.771 --rc geninfo_unexecuted_blocks=1 00:05:04.771 00:05:04.771 ' 00:05:04.771 19:14:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:04.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.771 --rc genhtml_branch_coverage=1 00:05:04.771 --rc genhtml_function_coverage=1 00:05:04.771 --rc genhtml_legend=1 00:05:04.771 --rc geninfo_all_blocks=1 00:05:04.771 --rc geninfo_unexecuted_blocks=1 00:05:04.771 00:05:04.771 
' 00:05:04.771 19:14:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:04.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.771 --rc genhtml_branch_coverage=1 00:05:04.771 --rc genhtml_function_coverage=1 00:05:04.771 --rc genhtml_legend=1 00:05:04.771 --rc geninfo_all_blocks=1 00:05:04.771 --rc geninfo_unexecuted_blocks=1 00:05:04.771 00:05:04.771 ' 00:05:04.771 19:14:03 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:04.771 OK 00:05:04.771 19:14:03 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.771 00:05:04.771 real 0m0.160s 00:05:04.771 user 0m0.098s 00:05:04.771 sys 0m0.072s 00:05:04.771 19:14:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.771 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.771 ************************************ 00:05:04.771 END TEST rpc_client 00:05:04.771 ************************************ 00:05:05.030 19:14:03 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:05.030 19:14:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.030 19:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.030 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.030 ************************************ 00:05:05.030 START TEST json_config 00:05:05.030 ************************************ 00:05:05.030 19:14:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:05.030 19:14:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.030 19:14:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.030 19:14:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.030 19:14:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.030 19:14:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.030 19:14:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.030 19:14:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.030 19:14:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.030 19:14:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.030 19:14:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.030 19:14:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.030 19:14:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.030 19:14:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.030 19:14:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.030 19:14:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.030 19:14:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.030 19:14:03 -- scripts/common.sh@344 -- # : 1 00:05:05.030 19:14:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.030 19:14:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.030 19:14:03 -- scripts/common.sh@364 -- # decimal 1 00:05:05.030 19:14:03 -- scripts/common.sh@352 -- # local d=1 00:05:05.030 19:14:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.030 19:14:03 -- scripts/common.sh@354 -- # echo 1 00:05:05.030 19:14:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.030 19:14:03 -- scripts/common.sh@365 -- # decimal 2 00:05:05.030 19:14:03 -- scripts/common.sh@352 -- # local d=2 00:05:05.030 19:14:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.030 19:14:03 -- scripts/common.sh@354 -- # echo 2 00:05:05.030 19:14:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.030 19:14:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.030 19:14:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.030 19:14:03 -- scripts/common.sh@367 -- # return 0 00:05:05.030 19:14:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.030 19:14:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.030 --rc genhtml_branch_coverage=1 00:05:05.030 --rc genhtml_function_coverage=1 00:05:05.030 --rc genhtml_legend=1 00:05:05.030 --rc geninfo_all_blocks=1 00:05:05.030 --rc geninfo_unexecuted_blocks=1 00:05:05.030 00:05:05.030 ' 00:05:05.030 19:14:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.030 --rc genhtml_branch_coverage=1 00:05:05.030 --rc genhtml_function_coverage=1 00:05:05.030 --rc genhtml_legend=1 00:05:05.030 --rc geninfo_all_blocks=1 00:05:05.030 --rc geninfo_unexecuted_blocks=1 00:05:05.030 00:05:05.030 ' 00:05:05.030 19:14:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.030 --rc genhtml_branch_coverage=1 00:05:05.030 --rc genhtml_function_coverage=1 00:05:05.030 --rc genhtml_legend=1 00:05:05.030 --rc geninfo_all_blocks=1 00:05:05.030 --rc geninfo_unexecuted_blocks=1 00:05:05.030 00:05:05.030 ' 00:05:05.030 19:14:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.030 --rc genhtml_branch_coverage=1 00:05:05.030 --rc genhtml_function_coverage=1 00:05:05.030 --rc genhtml_legend=1 00:05:05.030 --rc geninfo_all_blocks=1 00:05:05.030 --rc geninfo_unexecuted_blocks=1 00:05:05.030 00:05:05.030 ' 00:05:05.030 19:14:03 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.030 19:14:03 -- nvmf/common.sh@7 -- # uname -s 00:05:05.030 19:14:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.030 19:14:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.030 19:14:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.030 19:14:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.030 19:14:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.030 19:14:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.030 19:14:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.030 19:14:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.030 19:14:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.030 19:14:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.030 19:14:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:05.030 19:14:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:05.030 19:14:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.031 19:14:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.031 19:14:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.031 19:14:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.031 19:14:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.031 19:14:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.031 19:14:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.031 19:14:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.031 19:14:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.031 19:14:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.031 19:14:03 -- paths/export.sh@5 -- # export PATH 00:05:05.031 19:14:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.031 19:14:03 -- nvmf/common.sh@46 -- # : 0 00:05:05.031 19:14:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:05.031 19:14:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:05.031 19:14:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:05.031 19:14:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.031 19:14:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.031 19:14:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:05.031 19:14:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:05.031 19:14:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:05.031 19:14:03 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:05.031 19:14:03 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:05.031 19:14:03 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:05.031 19:14:03 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:05.031 19:14:03 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:05.031 19:14:03 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:05.031 19:14:03 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:05.031 19:14:03 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:05.031 19:14:03 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:05.031 19:14:03 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:05.031 19:14:03 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:05.031 19:14:03 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:05.031 19:14:03 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:05.031 19:14:03 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.031 19:14:03 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:05.031 INFO: JSON configuration test init 00:05:05.031 19:14:03 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:05.031 19:14:03 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:05.031 19:14:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.031 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.031 19:14:03 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:05.031 19:14:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.031 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.031 19:14:03 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:05.031 19:14:03 -- json_config/json_config.sh@98 -- # local app=target 00:05:05.031 19:14:03 -- json_config/json_config.sh@99 -- # shift 00:05:05.031 19:14:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:05.031 19:14:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:05.031 19:14:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:05.031 19:14:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:05.031 19:14:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:05.031 19:14:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=1073197 00:05:05.031 19:14:03 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:05.031 19:14:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:05.031 Waiting for target to run... 
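For orientation, the startup pattern json_config.sh exercises here reduces to launching the target on a private RPC socket with subsystem initialization deferred, then polling that socket until it answers. A rough sketch under the same flags echoed above; the polling loop is a simplification of the waitforlisten helper and assumes rpc_get_methods is reachable once the socket is up:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  app_pid=$!
  # crude stand-in for waitforlisten: retry until the UNIX-domain socket accepts RPCs
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done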
00:05:05.031 19:14:03 -- json_config/json_config.sh@114 -- # waitforlisten 1073197 /var/tmp/spdk_tgt.sock 00:05:05.031 19:14:03 -- common/autotest_common.sh@829 -- # '[' -z 1073197 ']' 00:05:05.031 19:14:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.031 19:14:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.031 19:14:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.031 19:14:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.031 19:14:03 -- common/autotest_common.sh@10 -- # set +x 00:05:05.031 [2024-11-17 19:14:03.248271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:05.031 [2024-11-17 19:14:03.248372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073197 ] 00:05:05.031 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.601 [2024-11-17 19:14:03.778467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.601 [2024-11-17 19:14:03.850729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.601 [2024-11-17 19:14:03.850898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.167 19:14:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.167 19:14:04 -- common/autotest_common.sh@862 -- # return 0 00:05:06.167 19:14:04 -- json_config/json_config.sh@115 -- # echo '' 00:05:06.167 00:05:06.167 19:14:04 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:06.167 19:14:04 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:06.167 19:14:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.167 19:14:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.167 19:14:04 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:06.167 19:14:04 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:06.167 19:14:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.167 19:14:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.167 19:14:04 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:06.167 19:14:04 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:06.167 19:14:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.454 19:14:07 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:09.454 19:14:07 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:09.454 19:14:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.454 19:14:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.454 19:14:07 -- json_config/json_config.sh@48 -- # local ret=0 00:05:09.455 19:14:07 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.455 19:14:07 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:09.455 19:14:07 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:09.455 19:14:07 -- 
json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.455 19:14:07 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:09.455 19:14:07 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:09.455 19:14:07 -- json_config/json_config.sh@51 -- # local get_types 00:05:09.455 19:14:07 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:09.455 19:14:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.455 19:14:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.455 19:14:07 -- json_config/json_config.sh@58 -- # return 0 00:05:09.455 19:14:07 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:09.455 19:14:07 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:09.455 19:14:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.455 19:14:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.455 19:14:07 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.455 19:14:07 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:09.455 19:14:07 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.455 19:14:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.713 MallocForNvmf0 00:05:09.713 19:14:07 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.713 19:14:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.971 MallocForNvmf1 00:05:09.971 19:14:08 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.971 19:14:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.229 [2024-11-17 19:14:08.391296] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.229 19:14:08 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.229 19:14:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.488 19:14:08 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.488 19:14:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
MallocForNvmf0 00:05:10.785 19:14:08 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.785 19:14:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.070 19:14:09 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.070 19:14:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.328 [2024-11-17 19:14:09.386461] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.328 19:14:09 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:11.328 19:14:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.328 19:14:09 -- common/autotest_common.sh@10 -- # set +x 00:05:11.328 19:14:09 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:11.328 19:14:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.328 19:14:09 -- common/autotest_common.sh@10 -- # set +x 00:05:11.328 19:14:09 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:11.328 19:14:09 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.328 19:14:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.586 MallocBdevForConfigChangeCheck 00:05:11.586 19:14:09 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:11.586 19:14:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.586 19:14:09 -- common/autotest_common.sh@10 -- # set +x 00:05:11.586 19:14:09 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:11.586 19:14:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.844 19:14:10 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:11.844 INFO: shutting down applications... 
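For reference, the target configuration exercised above can be reproduced against a running spdk_tgt with the same RPCs the test issues. This is a minimal sketch using only commands that appear in this log; the socket path and sizes are the test's own values, and it assumes spdk_tgt is already listening on /var/tmp/spdk_tgt.sock:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB malloc bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB malloc bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport, options as used by the test
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420   # listen on 127.0.0.1:4420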
00:05:11.844 19:14:10 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:11.844 19:14:10 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:11.844 19:14:10 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:11.844 19:14:10 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.744 Calling clear_iscsi_subsystem 00:05:13.744 Calling clear_nvmf_subsystem 00:05:13.744 Calling clear_nbd_subsystem 00:05:13.744 Calling clear_ublk_subsystem 00:05:13.744 Calling clear_vhost_blk_subsystem 00:05:13.744 Calling clear_vhost_scsi_subsystem 00:05:13.744 Calling clear_scheduler_subsystem 00:05:13.744 Calling clear_bdev_subsystem 00:05:13.744 Calling clear_accel_subsystem 00:05:13.744 Calling clear_vmd_subsystem 00:05:13.744 Calling clear_sock_subsystem 00:05:13.744 Calling clear_iobuf_subsystem 00:05:13.744 19:14:11 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.744 19:14:11 -- json_config/json_config.sh@396 -- # count=100 00:05:13.744 19:14:11 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:13.744 19:14:11 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.744 19:14:11 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.744 19:14:11 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:14.002 19:14:12 -- json_config/json_config.sh@398 -- # break 00:05:14.002 19:14:12 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:14.002 19:14:12 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:14.002 19:14:12 -- json_config/json_config.sh@120 -- # local app=target 00:05:14.002 19:14:12 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:14.002 19:14:12 -- json_config/json_config.sh@124 -- # [[ -n 1073197 ]] 00:05:14.002 19:14:12 -- json_config/json_config.sh@127 -- # kill -SIGINT 1073197 00:05:14.002 19:14:12 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:14.002 19:14:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:14.002 19:14:12 -- json_config/json_config.sh@130 -- # kill -0 1073197 00:05:14.002 19:14:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:14.571 19:14:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:14.571 19:14:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:14.571 19:14:12 -- json_config/json_config.sh@130 -- # kill -0 1073197 00:05:14.571 19:14:12 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:14.571 19:14:12 -- json_config/json_config.sh@132 -- # break 00:05:14.571 19:14:12 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:14.571 19:14:12 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:14.571 SPDK target shutdown done 00:05:14.571 19:14:12 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:14.571 INFO: relaunching applications... 
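The shutdown-and-relaunch step above follows a simple pattern: send SIGINT to the running target, poll its pid until it exits, then start a fresh spdk_tgt from the JSON saved earlier. A rough equivalent of what json_config.sh does here, with the pid and paths taken from this log (the 30 x 0.5 s loop mirrors the script's retry budget):

kill -SIGINT 1073197                      # ask the old target to shut down cleanly
for i in $(seq 1 30); do
    kill -0 1073197 2>/dev/null || break  # stop polling once the pid is gone
    sleep 0.5
done
# relaunch from the previously saved configuration
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &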
00:05:14.571 19:14:12 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.571 19:14:12 -- json_config/json_config.sh@98 -- # local app=target 00:05:14.571 19:14:12 -- json_config/json_config.sh@99 -- # shift 00:05:14.571 19:14:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:14.571 19:14:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:14.571 19:14:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:14.571 19:14:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:14.571 19:14:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:14.571 19:14:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=1074556 00:05:14.571 19:14:12 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.571 19:14:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:14.571 Waiting for target to run... 00:05:14.571 19:14:12 -- json_config/json_config.sh@114 -- # waitforlisten 1074556 /var/tmp/spdk_tgt.sock 00:05:14.571 19:14:12 -- common/autotest_common.sh@829 -- # '[' -z 1074556 ']' 00:05:14.571 19:14:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.571 19:14:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.571 19:14:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.571 19:14:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.571 19:14:12 -- common/autotest_common.sh@10 -- # set +x 00:05:14.571 [2024-11-17 19:14:12.632460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:14.571 [2024-11-17 19:14:12.632573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074556 ] 00:05:14.571 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.139 [2024-11-17 19:14:13.164606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.139 [2024-11-17 19:14:13.236958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.139 [2024-11-17 19:14:13.237141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.422 [2024-11-17 19:14:16.256712] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.422 [2024-11-17 19:14:16.289158] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.422 19:14:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.422 19:14:16 -- common/autotest_common.sh@862 -- # return 0 00:05:18.422 19:14:16 -- json_config/json_config.sh@115 -- # echo '' 00:05:18.422 00:05:18.422 19:14:16 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:18.422 19:14:16 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:18.422 INFO: Checking if target configuration is the same... 
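The 'Checking if target configuration is the same' step that follows is a normalized diff: the live target's save_config output and the JSON file it was started from are both run through config_filter.py -method sort, then compared with diff -u. A condensed sketch of that check (the temp-file names and the stdin/stdout plumbing here are illustrative assumptions; the real json_diff.sh uses mktemp and /dev/fd redirection as seen in the trace below):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
SORT="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort"
$RPC save_config | $SORT > /tmp/live_config.json                                                       # running target's view
$SORT < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/file_config.json  # saved file, same normalization
diff -u /tmp/live_config.json /tmp/file_config.json && echo 'INFO: JSON config files are the same'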
00:05:18.422 19:14:16 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.422 19:14:16 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:18.422 19:14:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.422 + '[' 2 -ne 2 ']' 00:05:18.422 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.422 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:18.422 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.422 +++ basename /dev/fd/62 00:05:18.422 ++ mktemp /tmp/62.XXX 00:05:18.422 + tmp_file_1=/tmp/62.VeR 00:05:18.422 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.422 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.422 + tmp_file_2=/tmp/spdk_tgt_config.json.fyv 00:05:18.422 + ret=0 00:05:18.422 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.681 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.681 + diff -u /tmp/62.VeR /tmp/spdk_tgt_config.json.fyv 00:05:18.939 + echo 'INFO: JSON config files are the same' 00:05:18.939 INFO: JSON config files are the same 00:05:18.939 + rm /tmp/62.VeR /tmp/spdk_tgt_config.json.fyv 00:05:18.939 + exit 0 00:05:18.939 19:14:16 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:18.939 19:14:16 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.939 INFO: changing configuration and checking if this can be detected... 00:05:18.939 19:14:16 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.940 19:14:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.940 19:14:17 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.940 19:14:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:18.940 19:14:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.198 + '[' 2 -ne 2 ']' 00:05:19.198 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.198 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:19.198 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.198 +++ basename /dev/fd/62 00:05:19.198 ++ mktemp /tmp/62.XXX 00:05:19.198 + tmp_file_1=/tmp/62.BDs 00:05:19.198 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.198 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.198 + tmp_file_2=/tmp/spdk_tgt_config.json.JFW 00:05:19.198 + ret=0 00:05:19.198 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.456 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.456 + diff -u /tmp/62.BDs /tmp/spdk_tgt_config.json.JFW 00:05:19.456 + ret=1 00:05:19.456 + echo '=== Start of file: /tmp/62.BDs ===' 00:05:19.456 + cat /tmp/62.BDs 00:05:19.456 + echo '=== End of file: /tmp/62.BDs ===' 00:05:19.456 + echo '' 00:05:19.456 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JFW ===' 00:05:19.456 + cat /tmp/spdk_tgt_config.json.JFW 00:05:19.456 + echo '=== End of file: /tmp/spdk_tgt_config.json.JFW ===' 00:05:19.456 + echo '' 00:05:19.456 + rm /tmp/62.BDs /tmp/spdk_tgt_config.json.JFW 00:05:19.456 + exit 1 00:05:19.456 19:14:17 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:19.456 INFO: configuration change detected. 00:05:19.456 19:14:17 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:19.456 19:14:17 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:19.456 19:14:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.456 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.456 19:14:17 -- json_config/json_config.sh@360 -- # local ret=0 00:05:19.456 19:14:17 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:19.456 19:14:17 -- json_config/json_config.sh@370 -- # [[ -n 1074556 ]] 00:05:19.456 19:14:17 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:19.456 19:14:17 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.456 19:14:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.456 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.456 19:14:17 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:19.456 19:14:17 -- json_config/json_config.sh@246 -- # uname -s 00:05:19.456 19:14:17 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:19.456 19:14:17 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:19.456 19:14:17 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:19.456 19:14:17 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.456 19:14:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.456 19:14:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.456 19:14:17 -- json_config/json_config.sh@376 -- # killprocess 1074556 00:05:19.456 19:14:17 -- common/autotest_common.sh@936 -- # '[' -z 1074556 ']' 00:05:19.456 19:14:17 -- common/autotest_common.sh@940 -- # kill -0 1074556 00:05:19.456 19:14:17 -- common/autotest_common.sh@941 -- # uname 00:05:19.456 19:14:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.456 19:14:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1074556 00:05:19.456 19:14:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.456 19:14:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.456 19:14:17 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1074556' 00:05:19.456 killing process with pid 1074556 00:05:19.456 19:14:17 -- common/autotest_common.sh@955 -- # kill 1074556 00:05:19.456 19:14:17 -- common/autotest_common.sh@960 -- # wait 1074556 00:05:21.357 19:14:19 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.357 19:14:19 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:21.357 19:14:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:21.357 19:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.357 19:14:19 -- json_config/json_config.sh@381 -- # return 0 00:05:21.357 19:14:19 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:21.357 INFO: Success 00:05:21.357 00:05:21.357 real 0m16.297s 00:05:21.357 user 0m18.482s 00:05:21.357 sys 0m2.314s 00:05:21.357 19:14:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.357 19:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.357 ************************************ 00:05:21.357 END TEST json_config 00:05:21.357 ************************************ 00:05:21.357 19:14:19 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.357 19:14:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.357 19:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.357 19:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.357 ************************************ 00:05:21.357 START TEST json_config_extra_key 00:05:21.357 ************************************ 00:05:21.357 19:14:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.357 19:14:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.357 19:14:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.357 19:14:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.357 19:14:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.357 19:14:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.357 19:14:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.357 19:14:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.357 19:14:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.357 19:14:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.357 19:14:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.357 19:14:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.357 19:14:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.357 19:14:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.357 19:14:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.357 19:14:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.357 19:14:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.357 19:14:19 -- scripts/common.sh@344 -- # : 1 00:05:21.357 19:14:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.357 19:14:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.357 19:14:19 -- scripts/common.sh@364 -- # decimal 1 00:05:21.357 19:14:19 -- scripts/common.sh@352 -- # local d=1 00:05:21.357 19:14:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.357 19:14:19 -- scripts/common.sh@354 -- # echo 1 00:05:21.357 19:14:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.357 19:14:19 -- scripts/common.sh@365 -- # decimal 2 00:05:21.357 19:14:19 -- scripts/common.sh@352 -- # local d=2 00:05:21.357 19:14:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.357 19:14:19 -- scripts/common.sh@354 -- # echo 2 00:05:21.357 19:14:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.357 19:14:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.357 19:14:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.357 19:14:19 -- scripts/common.sh@367 -- # return 0 00:05:21.357 19:14:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.357 19:14:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.357 --rc genhtml_branch_coverage=1 00:05:21.357 --rc genhtml_function_coverage=1 00:05:21.357 --rc genhtml_legend=1 00:05:21.357 --rc geninfo_all_blocks=1 00:05:21.357 --rc geninfo_unexecuted_blocks=1 00:05:21.357 00:05:21.357 ' 00:05:21.357 19:14:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.357 --rc genhtml_branch_coverage=1 00:05:21.357 --rc genhtml_function_coverage=1 00:05:21.357 --rc genhtml_legend=1 00:05:21.357 --rc geninfo_all_blocks=1 00:05:21.357 --rc geninfo_unexecuted_blocks=1 00:05:21.357 00:05:21.357 ' 00:05:21.357 19:14:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.358 --rc genhtml_branch_coverage=1 00:05:21.358 --rc genhtml_function_coverage=1 00:05:21.358 --rc genhtml_legend=1 00:05:21.358 --rc geninfo_all_blocks=1 00:05:21.358 --rc geninfo_unexecuted_blocks=1 00:05:21.358 00:05:21.358 ' 00:05:21.358 19:14:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.358 --rc genhtml_branch_coverage=1 00:05:21.358 --rc genhtml_function_coverage=1 00:05:21.358 --rc genhtml_legend=1 00:05:21.358 --rc geninfo_all_blocks=1 00:05:21.358 --rc geninfo_unexecuted_blocks=1 00:05:21.358 00:05:21.358 ' 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.358 19:14:19 -- nvmf/common.sh@7 -- # uname -s 00:05:21.358 19:14:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.358 19:14:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.358 19:14:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.358 19:14:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.358 19:14:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.358 19:14:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.358 19:14:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.358 19:14:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.358 19:14:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.358 19:14:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.358 19:14:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:21.358 19:14:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:21.358 19:14:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.358 19:14:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.358 19:14:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.358 19:14:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.358 19:14:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.358 19:14:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.358 19:14:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.358 19:14:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.358 19:14:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.358 19:14:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.358 19:14:19 -- paths/export.sh@5 -- # export PATH 00:05:21.358 19:14:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.358 19:14:19 -- nvmf/common.sh@46 -- # : 0 00:05:21.358 19:14:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:21.358 19:14:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:21.358 19:14:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:21.358 19:14:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.358 19:14:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.358 19:14:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:21.358 19:14:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:21.358 19:14:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:21.358 INFO: launching applications... 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1075506 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:21.358 Waiting for target to run... 00:05:21.358 19:14:19 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1075506 /var/tmp/spdk_tgt.sock 00:05:21.358 19:14:19 -- common/autotest_common.sh@829 -- # '[' -z 1075506 ']' 00:05:21.358 19:14:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.358 19:14:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.358 19:14:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.358 19:14:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.358 19:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.358 [2024-11-17 19:14:19.553515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:21.358 [2024-11-17 19:14:19.553614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075506 ] 00:05:21.358 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.927 [2024-11-17 19:14:20.060600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.927 [2024-11-17 19:14:20.138528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.927 [2024-11-17 19:14:20.138718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.493 19:14:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.493 19:14:20 -- common/autotest_common.sh@862 -- # return 0 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:22.493 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:22.493 INFO: shutting down applications... 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1075506 ]] 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1075506 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1075506 00:05:22.493 19:14:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1075506 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:22.752 SPDK target shutdown done 00:05:22.752 19:14:21 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:22.752 Success 00:05:22.752 00:05:22.752 real 0m1.647s 00:05:22.752 user 0m1.432s 00:05:22.752 sys 0m0.610s 00:05:22.752 19:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.753 19:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.753 ************************************ 00:05:22.753 END TEST json_config_extra_key 00:05:22.753 ************************************ 00:05:23.012 19:14:21 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.012 19:14:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.012 19:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.012 19:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.012 ************************************ 00:05:23.012 START TEST alias_rpc 00:05:23.012 ************************************ 00:05:23.012 19:14:21 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.012 * Looking for test storage... 00:05:23.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.012 19:14:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.012 19:14:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.012 19:14:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.012 19:14:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.012 19:14:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.012 19:14:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.012 19:14:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.012 19:14:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.012 19:14:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.012 19:14:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.012 19:14:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.012 19:14:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.012 19:14:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.012 19:14:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.012 19:14:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.012 19:14:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.012 19:14:21 -- scripts/common.sh@344 -- # : 1 00:05:23.012 19:14:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.012 19:14:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.012 19:14:21 -- scripts/common.sh@364 -- # decimal 1 00:05:23.012 19:14:21 -- scripts/common.sh@352 -- # local d=1 00:05:23.012 19:14:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.012 19:14:21 -- scripts/common.sh@354 -- # echo 1 00:05:23.012 19:14:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.012 19:14:21 -- scripts/common.sh@365 -- # decimal 2 00:05:23.012 19:14:21 -- scripts/common.sh@352 -- # local d=2 00:05:23.012 19:14:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.012 19:14:21 -- scripts/common.sh@354 -- # echo 2 00:05:23.012 19:14:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.012 19:14:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.012 19:14:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.012 19:14:21 -- scripts/common.sh@367 -- # return 0 00:05:23.012 19:14:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.012 19:14:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.012 --rc genhtml_branch_coverage=1 00:05:23.012 --rc genhtml_function_coverage=1 00:05:23.012 --rc genhtml_legend=1 00:05:23.012 --rc geninfo_all_blocks=1 00:05:23.012 --rc geninfo_unexecuted_blocks=1 00:05:23.012 00:05:23.012 ' 00:05:23.012 19:14:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.012 --rc genhtml_branch_coverage=1 00:05:23.012 --rc genhtml_function_coverage=1 00:05:23.012 --rc genhtml_legend=1 00:05:23.012 --rc geninfo_all_blocks=1 00:05:23.012 --rc geninfo_unexecuted_blocks=1 00:05:23.012 00:05:23.012 ' 00:05:23.012 19:14:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.012 --rc genhtml_branch_coverage=1 00:05:23.012 
--rc genhtml_function_coverage=1 00:05:23.012 --rc genhtml_legend=1 00:05:23.012 --rc geninfo_all_blocks=1 00:05:23.012 --rc geninfo_unexecuted_blocks=1 00:05:23.012 00:05:23.012 ' 00:05:23.012 19:14:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.012 --rc genhtml_branch_coverage=1 00:05:23.012 --rc genhtml_function_coverage=1 00:05:23.012 --rc genhtml_legend=1 00:05:23.012 --rc geninfo_all_blocks=1 00:05:23.012 --rc geninfo_unexecuted_blocks=1 00:05:23.012 00:05:23.012 ' 00:05:23.012 19:14:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.012 19:14:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1075702 00:05:23.012 19:14:21 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.012 19:14:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1075702 00:05:23.012 19:14:21 -- common/autotest_common.sh@829 -- # '[' -z 1075702 ']' 00:05:23.012 19:14:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.012 19:14:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.012 19:14:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.012 19:14:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.012 19:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.012 [2024-11-17 19:14:21.234430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:23.012 [2024-11-17 19:14:21.234515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075702 ] 00:05:23.012 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.271 [2024-11-17 19:14:21.293396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.271 [2024-11-17 19:14:21.382963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.271 [2024-11-17 19:14:21.383127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.205 19:14:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.205 19:14:22 -- common/autotest_common.sh@862 -- # return 0 00:05:24.205 19:14:22 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.205 19:14:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1075702 00:05:24.205 19:14:22 -- common/autotest_common.sh@936 -- # '[' -z 1075702 ']' 00:05:24.205 19:14:22 -- common/autotest_common.sh@940 -- # kill -0 1075702 00:05:24.205 19:14:22 -- common/autotest_common.sh@941 -- # uname 00:05:24.205 19:14:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.205 19:14:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1075702 00:05:24.463 19:14:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.463 19:14:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.463 19:14:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1075702' 00:05:24.463 killing process with pid 1075702 00:05:24.463 19:14:22 -- common/autotest_common.sh@955 -- # kill 1075702 00:05:24.463 
19:14:22 -- common/autotest_common.sh@960 -- # wait 1075702 00:05:24.722 00:05:24.722 real 0m1.846s 00:05:24.722 user 0m2.135s 00:05:24.722 sys 0m0.476s 00:05:24.722 19:14:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.722 19:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.722 ************************************ 00:05:24.722 END TEST alias_rpc 00:05:24.722 ************************************ 00:05:24.722 19:14:22 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:24.722 19:14:22 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.722 19:14:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.722 19:14:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.722 19:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.722 ************************************ 00:05:24.722 START TEST spdkcli_tcp 00:05:24.722 ************************************ 00:05:24.722 19:14:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.722 * Looking for test storage... 00:05:24.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:24.722 19:14:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:24.722 19:14:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.722 19:14:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:24.981 19:14:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.981 19:14:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.981 19:14:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.981 19:14:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.981 19:14:23 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.981 19:14:23 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.981 19:14:23 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.981 19:14:23 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.981 19:14:23 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.981 19:14:23 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.981 19:14:23 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.981 19:14:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.981 19:14:23 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.981 19:14:23 -- scripts/common.sh@344 -- # : 1 00:05:24.981 19:14:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.981 19:14:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.981 19:14:23 -- scripts/common.sh@364 -- # decimal 1 00:05:24.981 19:14:23 -- scripts/common.sh@352 -- # local d=1 00:05:24.981 19:14:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.981 19:14:23 -- scripts/common.sh@354 -- # echo 1 00:05:24.981 19:14:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.981 19:14:23 -- scripts/common.sh@365 -- # decimal 2 00:05:24.981 19:14:23 -- scripts/common.sh@352 -- # local d=2 00:05:24.981 19:14:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.981 19:14:23 -- scripts/common.sh@354 -- # echo 2 00:05:24.981 19:14:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.981 19:14:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.981 19:14:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.981 19:14:23 -- scripts/common.sh@367 -- # return 0 00:05:24.981 19:14:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.981 19:14:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.981 --rc genhtml_branch_coverage=1 00:05:24.981 --rc genhtml_function_coverage=1 00:05:24.981 --rc genhtml_legend=1 00:05:24.981 --rc geninfo_all_blocks=1 00:05:24.981 --rc geninfo_unexecuted_blocks=1 00:05:24.981 00:05:24.981 ' 00:05:24.981 19:14:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.981 --rc genhtml_branch_coverage=1 00:05:24.981 --rc genhtml_function_coverage=1 00:05:24.981 --rc genhtml_legend=1 00:05:24.981 --rc geninfo_all_blocks=1 00:05:24.981 --rc geninfo_unexecuted_blocks=1 00:05:24.981 00:05:24.981 ' 00:05:24.981 19:14:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.981 --rc genhtml_branch_coverage=1 00:05:24.981 --rc genhtml_function_coverage=1 00:05:24.981 --rc genhtml_legend=1 00:05:24.981 --rc geninfo_all_blocks=1 00:05:24.981 --rc geninfo_unexecuted_blocks=1 00:05:24.981 00:05:24.981 ' 00:05:24.981 19:14:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.981 --rc genhtml_branch_coverage=1 00:05:24.981 --rc genhtml_function_coverage=1 00:05:24.981 --rc genhtml_legend=1 00:05:24.981 --rc geninfo_all_blocks=1 00:05:24.981 --rc geninfo_unexecuted_blocks=1 00:05:24.981 00:05:24.981 ' 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:24.981 19:14:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:24.981 19:14:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.981 19:14:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.981 19:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1076031 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.981 19:14:23 -- spdkcli/tcp.sh@27 -- # waitforlisten 1076031 00:05:24.981 19:14:23 -- common/autotest_common.sh@829 -- # '[' -z 1076031 ']' 00:05:24.981 19:14:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.981 19:14:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.981 19:14:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.981 19:14:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.981 19:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.981 [2024-11-17 19:14:23.121141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:24.981 [2024-11-17 19:14:23.121219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076031 ] 00:05:24.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.981 [2024-11-17 19:14:23.177866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.240 [2024-11-17 19:14:23.267016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.240 [2024-11-17 19:14:23.267186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.240 [2024-11-17 19:14:23.267191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.173 19:14:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.173 19:14:24 -- common/autotest_common.sh@862 -- # return 0 00:05:26.173 19:14:24 -- spdkcli/tcp.sh@31 -- # socat_pid=1076172 00:05:26.173 19:14:24 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.173 19:14:24 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.173 [ 00:05:26.173 "bdev_malloc_delete", 00:05:26.173 "bdev_malloc_create", 00:05:26.173 "bdev_null_resize", 00:05:26.173 "bdev_null_delete", 00:05:26.173 "bdev_null_create", 00:05:26.173 "bdev_nvme_cuse_unregister", 00:05:26.173 "bdev_nvme_cuse_register", 00:05:26.173 "bdev_opal_new_user", 00:05:26.173 "bdev_opal_set_lock_state", 00:05:26.173 "bdev_opal_delete", 00:05:26.173 "bdev_opal_get_info", 00:05:26.173 "bdev_opal_create", 00:05:26.173 "bdev_nvme_opal_revert", 00:05:26.173 "bdev_nvme_opal_init", 00:05:26.173 "bdev_nvme_send_cmd", 00:05:26.173 "bdev_nvme_get_path_iostat", 00:05:26.173 "bdev_nvme_get_mdns_discovery_info", 00:05:26.173 "bdev_nvme_stop_mdns_discovery", 00:05:26.173 "bdev_nvme_start_mdns_discovery", 00:05:26.173 "bdev_nvme_set_multipath_policy", 00:05:26.173 "bdev_nvme_set_preferred_path", 00:05:26.173 "bdev_nvme_get_io_paths", 00:05:26.173 "bdev_nvme_remove_error_injection", 00:05:26.173 "bdev_nvme_add_error_injection", 00:05:26.173 "bdev_nvme_get_discovery_info", 00:05:26.173 "bdev_nvme_stop_discovery", 00:05:26.173 "bdev_nvme_start_discovery", 00:05:26.173 "bdev_nvme_get_controller_health_info", 00:05:26.173 "bdev_nvme_disable_controller", 00:05:26.173 "bdev_nvme_enable_controller", 00:05:26.173 "bdev_nvme_reset_controller", 00:05:26.173 "bdev_nvme_get_transport_statistics", 00:05:26.173 
"bdev_nvme_apply_firmware", 00:05:26.173 "bdev_nvme_detach_controller", 00:05:26.173 "bdev_nvme_get_controllers", 00:05:26.174 "bdev_nvme_attach_controller", 00:05:26.174 "bdev_nvme_set_hotplug", 00:05:26.174 "bdev_nvme_set_options", 00:05:26.174 "bdev_passthru_delete", 00:05:26.174 "bdev_passthru_create", 00:05:26.174 "bdev_lvol_grow_lvstore", 00:05:26.174 "bdev_lvol_get_lvols", 00:05:26.174 "bdev_lvol_get_lvstores", 00:05:26.174 "bdev_lvol_delete", 00:05:26.174 "bdev_lvol_set_read_only", 00:05:26.174 "bdev_lvol_resize", 00:05:26.174 "bdev_lvol_decouple_parent", 00:05:26.174 "bdev_lvol_inflate", 00:05:26.174 "bdev_lvol_rename", 00:05:26.174 "bdev_lvol_clone_bdev", 00:05:26.174 "bdev_lvol_clone", 00:05:26.174 "bdev_lvol_snapshot", 00:05:26.174 "bdev_lvol_create", 00:05:26.174 "bdev_lvol_delete_lvstore", 00:05:26.174 "bdev_lvol_rename_lvstore", 00:05:26.174 "bdev_lvol_create_lvstore", 00:05:26.174 "bdev_raid_set_options", 00:05:26.174 "bdev_raid_remove_base_bdev", 00:05:26.174 "bdev_raid_add_base_bdev", 00:05:26.174 "bdev_raid_delete", 00:05:26.174 "bdev_raid_create", 00:05:26.174 "bdev_raid_get_bdevs", 00:05:26.174 "bdev_error_inject_error", 00:05:26.174 "bdev_error_delete", 00:05:26.174 "bdev_error_create", 00:05:26.174 "bdev_split_delete", 00:05:26.174 "bdev_split_create", 00:05:26.174 "bdev_delay_delete", 00:05:26.174 "bdev_delay_create", 00:05:26.174 "bdev_delay_update_latency", 00:05:26.174 "bdev_zone_block_delete", 00:05:26.174 "bdev_zone_block_create", 00:05:26.174 "blobfs_create", 00:05:26.174 "blobfs_detect", 00:05:26.174 "blobfs_set_cache_size", 00:05:26.174 "bdev_aio_delete", 00:05:26.174 "bdev_aio_rescan", 00:05:26.174 "bdev_aio_create", 00:05:26.174 "bdev_ftl_set_property", 00:05:26.174 "bdev_ftl_get_properties", 00:05:26.174 "bdev_ftl_get_stats", 00:05:26.174 "bdev_ftl_unmap", 00:05:26.174 "bdev_ftl_unload", 00:05:26.174 "bdev_ftl_delete", 00:05:26.174 "bdev_ftl_load", 00:05:26.174 "bdev_ftl_create", 00:05:26.174 "bdev_virtio_attach_controller", 00:05:26.174 "bdev_virtio_scsi_get_devices", 00:05:26.174 "bdev_virtio_detach_controller", 00:05:26.174 "bdev_virtio_blk_set_hotplug", 00:05:26.174 "bdev_iscsi_delete", 00:05:26.174 "bdev_iscsi_create", 00:05:26.174 "bdev_iscsi_set_options", 00:05:26.174 "accel_error_inject_error", 00:05:26.174 "ioat_scan_accel_module", 00:05:26.174 "dsa_scan_accel_module", 00:05:26.174 "iaa_scan_accel_module", 00:05:26.174 "vfu_virtio_create_scsi_endpoint", 00:05:26.174 "vfu_virtio_scsi_remove_target", 00:05:26.174 "vfu_virtio_scsi_add_target", 00:05:26.174 "vfu_virtio_create_blk_endpoint", 00:05:26.174 "vfu_virtio_delete_endpoint", 00:05:26.174 "iscsi_set_options", 00:05:26.174 "iscsi_get_auth_groups", 00:05:26.174 "iscsi_auth_group_remove_secret", 00:05:26.174 "iscsi_auth_group_add_secret", 00:05:26.174 "iscsi_delete_auth_group", 00:05:26.174 "iscsi_create_auth_group", 00:05:26.174 "iscsi_set_discovery_auth", 00:05:26.174 "iscsi_get_options", 00:05:26.174 "iscsi_target_node_request_logout", 00:05:26.174 "iscsi_target_node_set_redirect", 00:05:26.174 "iscsi_target_node_set_auth", 00:05:26.174 "iscsi_target_node_add_lun", 00:05:26.174 "iscsi_get_connections", 00:05:26.174 "iscsi_portal_group_set_auth", 00:05:26.174 "iscsi_start_portal_group", 00:05:26.174 "iscsi_delete_portal_group", 00:05:26.174 "iscsi_create_portal_group", 00:05:26.174 "iscsi_get_portal_groups", 00:05:26.174 "iscsi_delete_target_node", 00:05:26.174 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.174 "iscsi_target_node_add_pg_ig_maps", 00:05:26.174 "iscsi_create_target_node", 
00:05:26.174 "iscsi_get_target_nodes", 00:05:26.174 "iscsi_delete_initiator_group", 00:05:26.174 "iscsi_initiator_group_remove_initiators", 00:05:26.174 "iscsi_initiator_group_add_initiators", 00:05:26.174 "iscsi_create_initiator_group", 00:05:26.174 "iscsi_get_initiator_groups", 00:05:26.174 "nvmf_set_crdt", 00:05:26.174 "nvmf_set_config", 00:05:26.174 "nvmf_set_max_subsystems", 00:05:26.174 "nvmf_subsystem_get_listeners", 00:05:26.174 "nvmf_subsystem_get_qpairs", 00:05:26.174 "nvmf_subsystem_get_controllers", 00:05:26.174 "nvmf_get_stats", 00:05:26.174 "nvmf_get_transports", 00:05:26.174 "nvmf_create_transport", 00:05:26.174 "nvmf_get_targets", 00:05:26.174 "nvmf_delete_target", 00:05:26.174 "nvmf_create_target", 00:05:26.174 "nvmf_subsystem_allow_any_host", 00:05:26.174 "nvmf_subsystem_remove_host", 00:05:26.174 "nvmf_subsystem_add_host", 00:05:26.174 "nvmf_subsystem_remove_ns", 00:05:26.174 "nvmf_subsystem_add_ns", 00:05:26.174 "nvmf_subsystem_listener_set_ana_state", 00:05:26.174 "nvmf_discovery_get_referrals", 00:05:26.174 "nvmf_discovery_remove_referral", 00:05:26.174 "nvmf_discovery_add_referral", 00:05:26.174 "nvmf_subsystem_remove_listener", 00:05:26.174 "nvmf_subsystem_add_listener", 00:05:26.174 "nvmf_delete_subsystem", 00:05:26.174 "nvmf_create_subsystem", 00:05:26.174 "nvmf_get_subsystems", 00:05:26.174 "env_dpdk_get_mem_stats", 00:05:26.174 "nbd_get_disks", 00:05:26.174 "nbd_stop_disk", 00:05:26.174 "nbd_start_disk", 00:05:26.174 "ublk_recover_disk", 00:05:26.174 "ublk_get_disks", 00:05:26.174 "ublk_stop_disk", 00:05:26.174 "ublk_start_disk", 00:05:26.174 "ublk_destroy_target", 00:05:26.174 "ublk_create_target", 00:05:26.174 "virtio_blk_create_transport", 00:05:26.174 "virtio_blk_get_transports", 00:05:26.174 "vhost_controller_set_coalescing", 00:05:26.174 "vhost_get_controllers", 00:05:26.174 "vhost_delete_controller", 00:05:26.174 "vhost_create_blk_controller", 00:05:26.174 "vhost_scsi_controller_remove_target", 00:05:26.174 "vhost_scsi_controller_add_target", 00:05:26.174 "vhost_start_scsi_controller", 00:05:26.174 "vhost_create_scsi_controller", 00:05:26.174 "thread_set_cpumask", 00:05:26.174 "framework_get_scheduler", 00:05:26.174 "framework_set_scheduler", 00:05:26.174 "framework_get_reactors", 00:05:26.174 "thread_get_io_channels", 00:05:26.174 "thread_get_pollers", 00:05:26.174 "thread_get_stats", 00:05:26.174 "framework_monitor_context_switch", 00:05:26.174 "spdk_kill_instance", 00:05:26.174 "log_enable_timestamps", 00:05:26.174 "log_get_flags", 00:05:26.174 "log_clear_flag", 00:05:26.174 "log_set_flag", 00:05:26.174 "log_get_level", 00:05:26.174 "log_set_level", 00:05:26.174 "log_get_print_level", 00:05:26.174 "log_set_print_level", 00:05:26.174 "framework_enable_cpumask_locks", 00:05:26.174 "framework_disable_cpumask_locks", 00:05:26.174 "framework_wait_init", 00:05:26.174 "framework_start_init", 00:05:26.174 "scsi_get_devices", 00:05:26.174 "bdev_get_histogram", 00:05:26.174 "bdev_enable_histogram", 00:05:26.174 "bdev_set_qos_limit", 00:05:26.174 "bdev_set_qd_sampling_period", 00:05:26.174 "bdev_get_bdevs", 00:05:26.174 "bdev_reset_iostat", 00:05:26.174 "bdev_get_iostat", 00:05:26.174 "bdev_examine", 00:05:26.174 "bdev_wait_for_examine", 00:05:26.174 "bdev_set_options", 00:05:26.174 "notify_get_notifications", 00:05:26.174 "notify_get_types", 00:05:26.174 "accel_get_stats", 00:05:26.174 "accel_set_options", 00:05:26.174 "accel_set_driver", 00:05:26.174 "accel_crypto_key_destroy", 00:05:26.174 "accel_crypto_keys_get", 00:05:26.174 "accel_crypto_key_create", 
00:05:26.174 "accel_assign_opc", 00:05:26.174 "accel_get_module_info", 00:05:26.174 "accel_get_opc_assignments", 00:05:26.174 "vmd_rescan", 00:05:26.174 "vmd_remove_device", 00:05:26.174 "vmd_enable", 00:05:26.174 "sock_set_default_impl", 00:05:26.174 "sock_impl_set_options", 00:05:26.174 "sock_impl_get_options", 00:05:26.174 "iobuf_get_stats", 00:05:26.174 "iobuf_set_options", 00:05:26.174 "framework_get_pci_devices", 00:05:26.174 "framework_get_config", 00:05:26.174 "framework_get_subsystems", 00:05:26.174 "vfu_tgt_set_base_path", 00:05:26.174 "trace_get_info", 00:05:26.174 "trace_get_tpoint_group_mask", 00:05:26.174 "trace_disable_tpoint_group", 00:05:26.174 "trace_enable_tpoint_group", 00:05:26.174 "trace_clear_tpoint_mask", 00:05:26.174 "trace_set_tpoint_mask", 00:05:26.174 "spdk_get_version", 00:05:26.174 "rpc_get_methods" 00:05:26.174 ] 00:05:26.174 19:14:24 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.174 19:14:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.174 19:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.174 19:14:24 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.174 19:14:24 -- spdkcli/tcp.sh@38 -- # killprocess 1076031 00:05:26.174 19:14:24 -- common/autotest_common.sh@936 -- # '[' -z 1076031 ']' 00:05:26.174 19:14:24 -- common/autotest_common.sh@940 -- # kill -0 1076031 00:05:26.174 19:14:24 -- common/autotest_common.sh@941 -- # uname 00:05:26.174 19:14:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.174 19:14:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1076031 00:05:26.174 19:14:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.174 19:14:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.174 19:14:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1076031' 00:05:26.174 killing process with pid 1076031 00:05:26.174 19:14:24 -- common/autotest_common.sh@955 -- # kill 1076031 00:05:26.174 19:14:24 -- common/autotest_common.sh@960 -- # wait 1076031 00:05:26.741 00:05:26.741 real 0m1.852s 00:05:26.741 user 0m3.573s 00:05:26.741 sys 0m0.495s 00:05:26.741 19:14:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.741 19:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.741 ************************************ 00:05:26.741 END TEST spdkcli_tcp 00:05:26.741 ************************************ 00:05:26.741 19:14:24 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.741 19:14:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.741 19:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.741 19:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.742 ************************************ 00:05:26.742 START TEST dpdk_mem_utility 00:05:26.742 ************************************ 00:05:26.742 19:14:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.742 * Looking for test storage... 
00:05:26.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.742 19:14:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.742 19:14:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.742 19:14:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.742 19:14:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.742 19:14:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.742 19:14:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.742 19:14:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.742 19:14:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.742 19:14:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.742 19:14:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.742 19:14:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.742 19:14:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.742 19:14:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.742 19:14:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.742 19:14:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.742 19:14:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.742 19:14:24 -- scripts/common.sh@344 -- # : 1 00:05:26.742 19:14:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.742 19:14:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.742 19:14:24 -- scripts/common.sh@364 -- # decimal 1 00:05:26.742 19:14:24 -- scripts/common.sh@352 -- # local d=1 00:05:26.742 19:14:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.742 19:14:24 -- scripts/common.sh@354 -- # echo 1 00:05:26.742 19:14:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.742 19:14:24 -- scripts/common.sh@365 -- # decimal 2 00:05:26.742 19:14:24 -- scripts/common.sh@352 -- # local d=2 00:05:26.742 19:14:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.742 19:14:24 -- scripts/common.sh@354 -- # echo 2 00:05:26.742 19:14:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.742 19:14:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.742 19:14:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.742 19:14:24 -- scripts/common.sh@367 -- # return 0 00:05:26.742 19:14:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.742 19:14:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.742 --rc genhtml_branch_coverage=1 00:05:26.742 --rc genhtml_function_coverage=1 00:05:26.742 --rc genhtml_legend=1 00:05:26.742 --rc geninfo_all_blocks=1 00:05:26.742 --rc geninfo_unexecuted_blocks=1 00:05:26.742 00:05:26.742 ' 00:05:26.742 19:14:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.742 --rc genhtml_branch_coverage=1 00:05:26.742 --rc genhtml_function_coverage=1 00:05:26.742 --rc genhtml_legend=1 00:05:26.742 --rc geninfo_all_blocks=1 00:05:26.742 --rc geninfo_unexecuted_blocks=1 00:05:26.742 00:05:26.742 ' 00:05:26.742 19:14:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.742 --rc genhtml_branch_coverage=1 00:05:26.742 --rc genhtml_function_coverage=1 00:05:26.742 --rc genhtml_legend=1 00:05:26.742 --rc geninfo_all_blocks=1 00:05:26.742 --rc geninfo_unexecuted_blocks=1 00:05:26.742 
00:05:26.742 ' 00:05:26.742 19:14:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.742 --rc genhtml_branch_coverage=1 00:05:26.742 --rc genhtml_function_coverage=1 00:05:26.742 --rc genhtml_legend=1 00:05:26.742 --rc geninfo_all_blocks=1 00:05:26.742 --rc geninfo_unexecuted_blocks=1 00:05:26.742 00:05:26.742 ' 00:05:26.742 19:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.742 19:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1076372 00:05:26.742 19:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.742 19:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1076372 00:05:26.742 19:14:24 -- common/autotest_common.sh@829 -- # '[' -z 1076372 ']' 00:05:26.742 19:14:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.742 19:14:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.742 19:14:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.742 19:14:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.742 19:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.742 [2024-11-17 19:14:24.986006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:26.742 [2024-11-17 19:14:24.986098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076372 ] 00:05:27.001 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.001 [2024-11-17 19:14:25.044635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.001 [2024-11-17 19:14:25.130753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.001 [2024-11-17 19:14:25.130905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.935 19:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.935 19:14:25 -- common/autotest_common.sh@862 -- # return 0 00:05:27.935 19:14:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.935 19:14:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.935 19:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.935 19:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.935 { 00:05:27.935 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.936 } 00:05:27.936 19:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.936 19:14:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.936 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.936 1 heaps totaling size 814.000000 MiB 00:05:27.936 size: 814.000000 MiB heap id: 0 00:05:27.936 end heaps---------- 00:05:27.936 8 mempools totaling size 598.116089 MiB 00:05:27.936 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.936 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.936 size: 84.521057 
MiB name: bdev_io_1076372 00:05:27.936 size: 51.011292 MiB name: evtpool_1076372 00:05:27.936 size: 50.003479 MiB name: msgpool_1076372 00:05:27.936 size: 21.763794 MiB name: PDU_Pool 00:05:27.936 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.936 size: 0.026123 MiB name: Session_Pool 00:05:27.936 end mempools------- 00:05:27.936 6 memzones totaling size 4.142822 MiB 00:05:27.936 size: 1.000366 MiB name: RG_ring_0_1076372 00:05:27.936 size: 1.000366 MiB name: RG_ring_1_1076372 00:05:27.936 size: 1.000366 MiB name: RG_ring_4_1076372 00:05:27.936 size: 1.000366 MiB name: RG_ring_5_1076372 00:05:27.936 size: 0.125366 MiB name: RG_ring_2_1076372 00:05:27.936 size: 0.015991 MiB name: RG_ring_3_1076372 00:05:27.936 end memzones------- 00:05:27.936 19:14:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.936 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.936 list of free elements. size: 12.519348 MiB 00:05:27.936 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.936 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.936 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.936 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.936 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.936 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.936 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.936 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.936 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.936 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.936 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.936 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.936 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.936 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.936 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.936 list of standard malloc elements. 
size: 199.218079 MiB 00:05:27.936 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.936 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.936 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.936 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.936 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.936 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.936 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.936 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.936 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.936 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:27.936 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.936 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.936 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.936 list of memzone associated elements. 
size: 602.262573 MiB 00:05:27.936 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.936 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.936 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.936 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.936 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.936 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1076372_0 00:05:27.936 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.936 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1076372_0 00:05:27.936 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.936 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1076372_0 00:05:27.936 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.936 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.936 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.936 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.936 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.936 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1076372 00:05:27.936 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.936 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1076372 00:05:27.936 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.936 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1076372 00:05:27.936 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.936 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.936 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.936 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.936 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.936 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.936 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.936 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.936 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.936 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1076372 00:05:27.936 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.936 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1076372 00:05:27.936 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.936 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1076372 00:05:27.936 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.936 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1076372 00:05:27.936 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.936 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1076372 00:05:27.936 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.936 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.936 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.936 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.936 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.936 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.937 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.937 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1076372 00:05:27.937 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.937 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.937 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.937 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.937 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.937 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1076372 00:05:27.937 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.937 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.937 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.937 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1076372 00:05:27.937 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.937 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1076372 00:05:27.937 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.937 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.937 19:14:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.937 19:14:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1076372 00:05:27.937 19:14:26 -- common/autotest_common.sh@936 -- # '[' -z 1076372 ']' 00:05:27.937 19:14:26 -- common/autotest_common.sh@940 -- # kill -0 1076372 00:05:27.937 19:14:26 -- common/autotest_common.sh@941 -- # uname 00:05:27.937 19:14:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.937 19:14:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1076372 00:05:27.937 19:14:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.937 19:14:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.937 19:14:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1076372' 00:05:27.937 killing process with pid 1076372 00:05:27.937 19:14:26 -- common/autotest_common.sh@955 -- # kill 1076372 00:05:27.937 19:14:26 -- common/autotest_common.sh@960 -- # wait 1076372 00:05:28.504 00:05:28.504 real 0m1.681s 00:05:28.504 user 0m1.861s 00:05:28.504 sys 0m0.448s 00:05:28.504 19:14:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.504 19:14:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.504 ************************************ 00:05:28.504 END TEST dpdk_mem_utility 00:05:28.504 ************************************ 00:05:28.504 19:14:26 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.504 19:14:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.504 19:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.504 19:14:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.504 ************************************ 00:05:28.504 START TEST event 00:05:28.504 ************************************ 00:05:28.504 19:14:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.504 * Looking for test storage... 
00:05:28.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.504 19:14:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:28.504 19:14:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:28.504 19:14:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.504 19:14:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.504 19:14:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.504 19:14:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.504 19:14:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.504 19:14:26 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.504 19:14:26 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.504 19:14:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.504 19:14:26 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.504 19:14:26 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.504 19:14:26 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.504 19:14:26 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.504 19:14:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.504 19:14:26 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.504 19:14:26 -- scripts/common.sh@344 -- # : 1 00:05:28.504 19:14:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.504 19:14:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.504 19:14:26 -- scripts/common.sh@364 -- # decimal 1 00:05:28.504 19:14:26 -- scripts/common.sh@352 -- # local d=1 00:05:28.504 19:14:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.504 19:14:26 -- scripts/common.sh@354 -- # echo 1 00:05:28.504 19:14:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.504 19:14:26 -- scripts/common.sh@365 -- # decimal 2 00:05:28.504 19:14:26 -- scripts/common.sh@352 -- # local d=2 00:05:28.504 19:14:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.504 19:14:26 -- scripts/common.sh@354 -- # echo 2 00:05:28.504 19:14:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.504 19:14:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.504 19:14:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.504 19:14:26 -- scripts/common.sh@367 -- # return 0 00:05:28.504 19:14:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.504 19:14:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.504 --rc genhtml_branch_coverage=1 00:05:28.504 --rc genhtml_function_coverage=1 00:05:28.504 --rc genhtml_legend=1 00:05:28.504 --rc geninfo_all_blocks=1 00:05:28.504 --rc geninfo_unexecuted_blocks=1 00:05:28.504 00:05:28.504 ' 00:05:28.504 19:14:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.504 --rc genhtml_branch_coverage=1 00:05:28.504 --rc genhtml_function_coverage=1 00:05:28.504 --rc genhtml_legend=1 00:05:28.504 --rc geninfo_all_blocks=1 00:05:28.504 --rc geninfo_unexecuted_blocks=1 00:05:28.504 00:05:28.504 ' 00:05:28.504 19:14:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.504 --rc genhtml_branch_coverage=1 00:05:28.504 --rc genhtml_function_coverage=1 00:05:28.504 --rc genhtml_legend=1 00:05:28.504 --rc geninfo_all_blocks=1 00:05:28.504 --rc geninfo_unexecuted_blocks=1 00:05:28.504 00:05:28.504 ' 
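The xtrace above is autotest_common.sh gating coverage collection on the installed lcov version: cmp_versions splits each version string on '.', '-' and ':' and compares the components numerically, and only a pre-2.x lcov gets the legacy --rc options exported. A minimal standalone sketch of that gate, assuming only an lcov binary on PATH (the variable names and loop here are illustrative, not the helpers from scripts/common.sh):

# Export the legacy LCOV_OPTS only when the installed lcov is older than 2.x.
lcov_ver=$(lcov --version | awk '{print $NF}')     # e.g. "1.15"
IFS='.-' read -ra have <<< "$lcov_ver"
IFS='.-' read -ra want <<< "2"
older=0
for ((v = 0; v < ${#have[@]} || v < ${#want[@]}; v++)); do
    (( ${have[v]:-0} < ${want[v]:-0} )) && { older=1; break; }
    (( ${have[v]:-0} > ${want[v]:-0} )) && break
done
(( older )) && export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'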
00:05:28.504 19:14:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.504 --rc genhtml_branch_coverage=1 00:05:28.504 --rc genhtml_function_coverage=1 00:05:28.504 --rc genhtml_legend=1 00:05:28.504 --rc geninfo_all_blocks=1 00:05:28.504 --rc geninfo_unexecuted_blocks=1 00:05:28.504 00:05:28.504 ' 00:05:28.504 19:14:26 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.504 19:14:26 -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.504 19:14:26 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.504 19:14:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:28.504 19:14:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.504 19:14:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.504 ************************************ 00:05:28.504 START TEST event_perf 00:05:28.504 ************************************ 00:05:28.504 19:14:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.504 Running I/O for 1 seconds...[2024-11-17 19:14:26.656402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:28.504 [2024-11-17 19:14:26.656486] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076582 ] 00:05:28.504 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.504 [2024-11-17 19:14:26.718065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.763 [2024-11-17 19:14:26.808908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.763 [2024-11-17 19:14:26.808966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.763 [2024-11-17 19:14:26.809033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.763 [2024-11-17 19:14:26.809035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.696 Running I/O for 1 seconds... 00:05:29.696 lcore 0: 228736 00:05:29.696 lcore 1: 228734 00:05:29.696 lcore 2: 228735 00:05:29.696 lcore 3: 228735 00:05:29.696 done. 
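The per-lcore counters printed just before "done." come from the event_perf microbenchmark: -m 0xF starts reactors on four cores and -t 1 runs the event loop for one second, after which each lcore reports how many events it processed (roughly equal counts, as above, mean the events were spread evenly). A sketch of running the same binary by hand, assuming the workspace layout shown in this log and that hugepages are already configured:

# One-second event round-trip benchmark on four reactors (core mask 0xF).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo $SPDK/test/event/event_perf/event_perf -m 0xF -t 1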
00:05:29.696 00:05:29.696 real 0m1.235s 00:05:29.696 user 0m4.154s 00:05:29.696 sys 0m0.073s 00:05:29.696 19:14:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.696 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.696 ************************************ 00:05:29.696 END TEST event_perf 00:05:29.696 ************************************ 00:05:29.696 19:14:27 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.696 19:14:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:29.696 19:14:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.696 19:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.696 ************************************ 00:05:29.696 START TEST event_reactor 00:05:29.697 ************************************ 00:05:29.697 19:14:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.697 [2024-11-17 19:14:27.917501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:29.697 [2024-11-17 19:14:27.917584] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076743 ] 00:05:29.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.955 [2024-11-17 19:14:27.976739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.955 [2024-11-17 19:14:28.058465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.889 test_start 00:05:30.889 oneshot 00:05:30.889 tick 100 00:05:30.889 tick 100 00:05:30.889 tick 250 00:05:30.889 tick 100 00:05:30.889 tick 100 00:05:30.889 tick 100 00:05:30.889 tick 250 00:05:30.889 tick 500 00:05:30.889 tick 100 00:05:30.889 tick 100 00:05:30.889 tick 250 00:05:30.889 tick 100 00:05:30.889 tick 100 00:05:30.889 test_end 00:05:30.889 00:05:30.889 real 0m1.229s 00:05:30.889 user 0m1.144s 00:05:30.889 sys 0m0.080s 00:05:30.889 19:14:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.889 19:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 ************************************ 00:05:30.889 END TEST event_reactor 00:05:30.889 ************************************ 00:05:31.147 19:14:29 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.147 19:14:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:31.147 19:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.147 19:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.147 ************************************ 00:05:31.147 START TEST event_reactor_perf 00:05:31.147 ************************************ 00:05:31.147 19:14:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.147 [2024-11-17 19:14:29.176581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:31.147 [2024-11-17 19:14:29.176690] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076900 ] 00:05:31.147 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.148 [2024-11-17 19:14:29.235488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.148 [2024-11-17 19:14:29.319195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.522 test_start 00:05:32.522 test_end 00:05:32.522 Performance: 430375 events per second 00:05:32.522 00:05:32.522 real 0m1.235s 00:05:32.522 user 0m1.151s 00:05:32.522 sys 0m0.078s 00:05:32.522 19:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.522 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.522 ************************************ 00:05:32.522 END TEST event_reactor_perf 00:05:32.522 ************************************ 00:05:32.522 19:14:30 -- event/event.sh@49 -- # uname -s 00:05:32.522 19:14:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.522 19:14:30 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.522 19:14:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.522 19:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.522 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.522 ************************************ 00:05:32.522 START TEST event_scheduler 00:05:32.522 ************************************ 00:05:32.522 19:14:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.522 * Looking for test storage... 00:05:32.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.523 19:14:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.523 19:14:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.523 19:14:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:32.523 19:14:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:32.523 19:14:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:32.523 19:14:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:32.523 19:14:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:32.523 19:14:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:32.523 19:14:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:32.523 19:14:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.523 19:14:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:32.523 19:14:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:32.523 19:14:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:32.523 19:14:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:32.523 19:14:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:32.523 19:14:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:32.523 19:14:30 -- scripts/common.sh@344 -- # : 1 00:05:32.523 19:14:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:32.523 19:14:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.523 19:14:30 -- scripts/common.sh@364 -- # decimal 1 00:05:32.523 19:14:30 -- scripts/common.sh@352 -- # local d=1 00:05:32.523 19:14:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.523 19:14:30 -- scripts/common.sh@354 -- # echo 1 00:05:32.523 19:14:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.523 19:14:30 -- scripts/common.sh@365 -- # decimal 2 00:05:32.523 19:14:30 -- scripts/common.sh@352 -- # local d=2 00:05:32.523 19:14:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.523 19:14:30 -- scripts/common.sh@354 -- # echo 2 00:05:32.523 19:14:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.523 19:14:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.523 19:14:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.523 19:14:30 -- scripts/common.sh@367 -- # return 0 00:05:32.523 19:14:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.523 19:14:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.523 --rc genhtml_branch_coverage=1 00:05:32.523 --rc genhtml_function_coverage=1 00:05:32.523 --rc genhtml_legend=1 00:05:32.523 --rc geninfo_all_blocks=1 00:05:32.523 --rc geninfo_unexecuted_blocks=1 00:05:32.523 00:05:32.523 ' 00:05:32.523 19:14:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.523 --rc genhtml_branch_coverage=1 00:05:32.523 --rc genhtml_function_coverage=1 00:05:32.523 --rc genhtml_legend=1 00:05:32.523 --rc geninfo_all_blocks=1 00:05:32.523 --rc geninfo_unexecuted_blocks=1 00:05:32.523 00:05:32.523 ' 00:05:32.523 19:14:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.523 --rc genhtml_branch_coverage=1 00:05:32.523 --rc genhtml_function_coverage=1 00:05:32.523 --rc genhtml_legend=1 00:05:32.523 --rc geninfo_all_blocks=1 00:05:32.523 --rc geninfo_unexecuted_blocks=1 00:05:32.523 00:05:32.523 ' 00:05:32.523 19:14:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.523 --rc genhtml_branch_coverage=1 00:05:32.523 --rc genhtml_function_coverage=1 00:05:32.523 --rc genhtml_legend=1 00:05:32.523 --rc geninfo_all_blocks=1 00:05:32.523 --rc geninfo_unexecuted_blocks=1 00:05:32.523 00:05:32.523 ' 00:05:32.523 19:14:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.523 19:14:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1077206 00:05:32.523 19:14:30 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.523 19:14:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.523 19:14:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 1077206 00:05:32.523 19:14:30 -- common/autotest_common.sh@829 -- # '[' -z 1077206 ']' 00:05:32.523 19:14:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.523 19:14:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.523 19:14:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:32.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.523 19:14:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.523 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.523 [2024-11-17 19:14:30.620451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:32.523 [2024-11-17 19:14:30.620545] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077206 ] 00:05:32.523 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.523 [2024-11-17 19:14:30.678783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.523 [2024-11-17 19:14:30.764808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.523 [2024-11-17 19:14:30.764867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.523 [2024-11-17 19:14:30.764933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.523 [2024-11-17 19:14:30.764936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.783 19:14:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.783 19:14:30 -- common/autotest_common.sh@862 -- # return 0 00:05:32.783 19:14:30 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.783 19:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.783 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 POWER: Env isn't set yet! 00:05:32.783 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:32.783 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:32.783 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:32.783 POWER: Cannot get available frequencies of lcore 0 00:05:32.783 POWER: Attempting to initialise PSTAT power management... 00:05:32.783 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:32.783 POWER: Initialized successfully for lcore 0 power management 00:05:32.783 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:32.783 POWER: Initialized successfully for lcore 1 power management 00:05:32.783 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:32.783 POWER: Initialized successfully for lcore 2 power management 00:05:32.783 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:32.783 POWER: Initialized successfully for lcore 3 power management 00:05:32.783 [2024-11-17 19:14:30.892912] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.783 [2024-11-17 19:14:30.892933] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.783 [2024-11-17 19:14:30.892945] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.783 19:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.783 19:14:30 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.783 19:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.783 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 [2024-11-17 19:14:30.994901] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
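The scheduler app above was started with --wait-for-rpc, switched to the dynamic scheduler over JSON-RPC, and only then ran framework_start_init; all of the RPCs involved also appear in the rpc_get_methods listing earlier in this log. A sketch of driving the same flow by hand against a plain spdk_tgt (default /var/tmp/spdk.sock socket; the sleep is just a crude stand-in for the test's waitforlisten helper):

# Start a target in wait-for-rpc mode, pick the dynamic scheduler, then init.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo $SPDK/build/bin/spdk_tgt -m 0xF --wait-for-rpc &
sleep 2                                                   # let the RPC socket come up
sudo $SPDK/scripts/rpc.py framework_set_scheduler dynamic
sudo $SPDK/scripts/rpc.py framework_start_init
sudo $SPDK/scripts/rpc.py framework_get_reactors          # shows which lcore each thread landed on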
00:05:32.783 19:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.783 19:14:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.783 19:14:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.783 19:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.783 19:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 ************************************ 00:05:32.783 START TEST scheduler_create_thread 00:05:32.783 ************************************ 00:05:32.783 19:14:30 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:32.783 19:14:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.783 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.783 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 2 00:05:32.783 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.783 19:14:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.783 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.783 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 3 00:05:32.783 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.783 19:14:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.783 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.783 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 4 00:05:32.783 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.783 19:14:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.783 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.783 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 5 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 6 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 7 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 8 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 9 00:05:33.042 
19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 10 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.042 19:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.042 19:14:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.042 19:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.042 19:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.414 19:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.414 19:14:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.414 19:14:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.414 19:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.414 19:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.785 19:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.785 00:05:35.785 real 0m2.620s 00:05:35.785 user 0m0.011s 00:05:35.785 sys 0m0.003s 00:05:35.785 19:14:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.785 19:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.785 ************************************ 00:05:35.785 END TEST scheduler_create_thread 00:05:35.785 ************************************ 00:05:35.785 19:14:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.785 19:14:33 -- scheduler/scheduler.sh@46 -- # killprocess 1077206 00:05:35.785 19:14:33 -- common/autotest_common.sh@936 -- # '[' -z 1077206 ']' 00:05:35.785 19:14:33 -- common/autotest_common.sh@940 -- # kill -0 1077206 00:05:35.785 19:14:33 -- common/autotest_common.sh@941 -- # uname 00:05:35.785 19:14:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.785 19:14:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1077206 00:05:35.785 19:14:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:35.785 19:14:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:35.785 19:14:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1077206' 00:05:35.786 killing process with pid 1077206 00:05:35.786 19:14:33 -- common/autotest_common.sh@955 -- # kill 1077206 00:05:35.786 19:14:33 -- common/autotest_common.sh@960 -- # wait 1077206 00:05:36.043 [2024-11-17 19:14:34.101921] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
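scheduler_create_thread, which just passed above, exercises the test-only scheduler_plugin RPCs: it creates pinned active and idle threads, an unpinned one-third-active and a half-active thread, raises the half-active thread to 50% busy, then deletes a throwaway thread before the app is killed. Condensed, the sequence the trace walked through is the following (rpc_cmd is the autotest wrapper around rpc.py; thread IDs 11 and 12 simply echo the values returned above):

# Thread lifecycle as driven by the scheduler test plugin RPCs.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0      # returned id 11 above
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                # make it 50% busy
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100        # returned id 12 above
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12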
00:05:36.043 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:36.043 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:36.043 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:36.043 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:36.043 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:36.043 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:36.043 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:36.043 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:36.302 00:05:36.302 real 0m3.893s 00:05:36.302 user 0m5.919s 00:05:36.302 sys 0m0.355s 00:05:36.302 19:14:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.302 19:14:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.302 ************************************ 00:05:36.302 END TEST event_scheduler 00:05:36.302 ************************************ 00:05:36.302 19:14:34 -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.302 19:14:34 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.302 19:14:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.302 19:14:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.302 19:14:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.302 ************************************ 00:05:36.302 START TEST app_repeat 00:05:36.302 ************************************ 00:05:36.302 19:14:34 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:36.302 19:14:34 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.302 19:14:34 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.302 19:14:34 -- event/event.sh@13 -- # local nbd_list 00:05:36.302 19:14:34 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.302 19:14:34 -- event/event.sh@14 -- # local bdev_list 00:05:36.302 19:14:34 -- event/event.sh@15 -- # local repeat_times=4 00:05:36.302 19:14:34 -- event/event.sh@17 -- # modprobe nbd 00:05:36.302 19:14:34 -- event/event.sh@19 -- # repeat_pid=1077673 00:05:36.302 19:14:34 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.302 19:14:34 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.302 19:14:34 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1077673' 00:05:36.302 Process app_repeat pid: 1077673 00:05:36.302 19:14:34 -- event/event.sh@23 -- # for i in {0..2} 00:05:36.302 19:14:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.302 spdk_app_start Round 0 00:05:36.302 19:14:34 -- event/event.sh@25 -- # waitforlisten 1077673 /var/tmp/spdk-nbd.sock 00:05:36.302 19:14:34 -- common/autotest_common.sh@829 -- # '[' -z 1077673 ']' 00:05:36.302 19:14:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.302 19:14:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.302 19:14:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:36.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.302 19:14:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.302 19:14:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.302 [2024-11-17 19:14:34.384120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:36.302 [2024-11-17 19:14:34.384210] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077673 ] 00:05:36.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.302 [2024-11-17 19:14:34.446895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.302 [2024-11-17 19:14:34.535413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.302 [2024-11-17 19:14:34.535414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.235 19:14:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.235 19:14:35 -- common/autotest_common.sh@862 -- # return 0 00:05:37.235 19:14:35 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.493 Malloc0 00:05:37.493 19:14:35 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.751 Malloc1 00:05:37.751 19:14:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@12 -- # local i 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.751 19:14:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.009 /dev/nbd0 00:05:38.009 19:14:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.009 19:14:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.009 19:14:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:38.009 19:14:36 -- common/autotest_common.sh@867 -- # local i 00:05:38.009 19:14:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.009 19:14:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.009 19:14:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:38.009 19:14:36 -- 
common/autotest_common.sh@871 -- # break 00:05:38.009 19:14:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.009 19:14:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.009 19:14:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.009 1+0 records in 00:05:38.009 1+0 records out 00:05:38.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229125 s, 17.9 MB/s 00:05:38.009 19:14:36 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.009 19:14:36 -- common/autotest_common.sh@884 -- # size=4096 00:05:38.009 19:14:36 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.009 19:14:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.009 19:14:36 -- common/autotest_common.sh@887 -- # return 0 00:05:38.009 19:14:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.009 19:14:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.009 19:14:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.267 /dev/nbd1 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.267 19:14:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:38.267 19:14:36 -- common/autotest_common.sh@867 -- # local i 00:05:38.267 19:14:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.267 19:14:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.267 19:14:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:38.267 19:14:36 -- common/autotest_common.sh@871 -- # break 00:05:38.267 19:14:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.267 19:14:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.267 19:14:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.267 1+0 records in 00:05:38.267 1+0 records out 00:05:38.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184299 s, 22.2 MB/s 00:05:38.267 19:14:36 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.267 19:14:36 -- common/autotest_common.sh@884 -- # size=4096 00:05:38.267 19:14:36 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.267 19:14:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.267 19:14:36 -- common/autotest_common.sh@887 -- # return 0 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.267 19:14:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.525 { 00:05:38.525 "nbd_device": "/dev/nbd0", 00:05:38.525 "bdev_name": "Malloc0" 00:05:38.525 }, 00:05:38.525 { 00:05:38.525 "nbd_device": "/dev/nbd1", 
00:05:38.525 "bdev_name": "Malloc1" 00:05:38.525 } 00:05:38.525 ]' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.525 { 00:05:38.525 "nbd_device": "/dev/nbd0", 00:05:38.525 "bdev_name": "Malloc0" 00:05:38.525 }, 00:05:38.525 { 00:05:38.525 "nbd_device": "/dev/nbd1", 00:05:38.525 "bdev_name": "Malloc1" 00:05:38.525 } 00:05:38.525 ]' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.525 /dev/nbd1' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.525 /dev/nbd1' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.525 256+0 records in 00:05:38.525 256+0 records out 00:05:38.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515815 s, 203 MB/s 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.525 256+0 records in 00:05:38.525 256+0 records out 00:05:38.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232183 s, 45.2 MB/s 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.525 19:14:36 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.784 256+0 records in 00:05:38.784 256+0 records out 00:05:38.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023508 s, 44.6 MB/s 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@51 -- # local i 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.784 19:14:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@41 -- # break 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.042 19:14:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@41 -- # break 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.299 19:14:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@65 -- # true 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.557 19:14:37 -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.558 19:14:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.558 19:14:37 -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.558 19:14:37 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.816 19:14:37 -- event/event.sh@35 -- # 
sleep 3 00:05:40.075 [2024-11-17 19:14:38.181212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.075 [2024-11-17 19:14:38.269567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.075 [2024-11-17 19:14:38.269567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.075 [2024-11-17 19:14:38.330637] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.075 [2024-11-17 19:14:38.330739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.355 19:14:40 -- event/event.sh@23 -- # for i in {0..2} 00:05:43.355 19:14:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.355 spdk_app_start Round 1 00:05:43.355 19:14:40 -- event/event.sh@25 -- # waitforlisten 1077673 /var/tmp/spdk-nbd.sock 00:05:43.355 19:14:40 -- common/autotest_common.sh@829 -- # '[' -z 1077673 ']' 00:05:43.355 19:14:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.355 19:14:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.355 19:14:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.355 19:14:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.355 19:14:40 -- common/autotest_common.sh@10 -- # set +x 00:05:43.355 19:14:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.355 19:14:41 -- common/autotest_common.sh@862 -- # return 0 00:05:43.355 19:14:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.355 Malloc0 00:05:43.355 19:14:41 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.613 Malloc1 00:05:43.613 19:14:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@12 -- # local i 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.613 19:14:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.871 /dev/nbd0 00:05:43.871 19:14:42 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.871 19:14:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.871 19:14:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:43.871 19:14:42 -- common/autotest_common.sh@867 -- # local i 00:05:43.871 19:14:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.871 19:14:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.871 19:14:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:43.871 19:14:42 -- common/autotest_common.sh@871 -- # break 00:05:43.871 19:14:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.871 19:14:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.871 19:14:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.871 1+0 records in 00:05:43.871 1+0 records out 00:05:43.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219843 s, 18.6 MB/s 00:05:43.871 19:14:42 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.871 19:14:42 -- common/autotest_common.sh@884 -- # size=4096 00:05:43.871 19:14:42 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.871 19:14:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.871 19:14:42 -- common/autotest_common.sh@887 -- # return 0 00:05:43.871 19:14:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.871 19:14:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.871 19:14:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.147 /dev/nbd1 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.147 19:14:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:44.147 19:14:42 -- common/autotest_common.sh@867 -- # local i 00:05:44.147 19:14:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.147 19:14:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.147 19:14:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:44.147 19:14:42 -- common/autotest_common.sh@871 -- # break 00:05:44.147 19:14:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.147 19:14:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.147 19:14:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.147 1+0 records in 00:05:44.147 1+0 records out 00:05:44.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186169 s, 22.0 MB/s 00:05:44.147 19:14:42 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.147 19:14:42 -- common/autotest_common.sh@884 -- # size=4096 00:05:44.147 19:14:42 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.147 19:14:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.147 19:14:42 -- common/autotest_common.sh@887 -- # return 0 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.147 19:14:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.439 { 00:05:44.439 "nbd_device": "/dev/nbd0", 00:05:44.439 "bdev_name": "Malloc0" 00:05:44.439 }, 00:05:44.439 { 00:05:44.439 "nbd_device": "/dev/nbd1", 00:05:44.439 "bdev_name": "Malloc1" 00:05:44.439 } 00:05:44.439 ]' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.439 { 00:05:44.439 "nbd_device": "/dev/nbd0", 00:05:44.439 "bdev_name": "Malloc0" 00:05:44.439 }, 00:05:44.439 { 00:05:44.439 "nbd_device": "/dev/nbd1", 00:05:44.439 "bdev_name": "Malloc1" 00:05:44.439 } 00:05:44.439 ]' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.439 /dev/nbd1' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.439 /dev/nbd1' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.439 256+0 records in 00:05:44.439 256+0 records out 00:05:44.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538524 s, 195 MB/s 00:05:44.439 19:14:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.440 256+0 records in 00:05:44.440 256+0 records out 00:05:44.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228317 s, 45.9 MB/s 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.440 256+0 records in 00:05:44.440 256+0 records out 00:05:44.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239648 s, 43.8 MB/s 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@51 -- # local i 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.440 19:14:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@41 -- # break 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.011 19:14:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@41 -- # break 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.011 19:14:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.269 19:14:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.269 19:14:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.269 19:14:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@65 -- # true 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.527 19:14:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.527 19:14:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.785 19:14:43 -- event/event.sh@35 -- # sleep 3 00:05:45.785 [2024-11-17 19:14:44.051159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.043 [2024-11-17 19:14:44.138627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.043 [2024-11-17 19:14:44.138630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.043 [2024-11-17 19:14:44.200272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.043 [2024-11-17 19:14:44.200347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.570 19:14:46 -- event/event.sh@23 -- # for i in {0..2} 00:05:48.570 19:14:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.570 spdk_app_start Round 2 00:05:48.570 19:14:46 -- event/event.sh@25 -- # waitforlisten 1077673 /var/tmp/spdk-nbd.sock 00:05:48.570 19:14:46 -- common/autotest_common.sh@829 -- # '[' -z 1077673 ']' 00:05:48.570 19:14:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.570 19:14:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.570 19:14:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:48.570 19:14:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.570 19:14:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.827 19:14:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.827 19:14:47 -- common/autotest_common.sh@862 -- # return 0 00:05:48.827 19:14:47 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.086 Malloc0 00:05:49.086 19:14:47 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.653 Malloc1 00:05:49.653 19:14:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@12 -- # local i 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.653 /dev/nbd0 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.653 19:14:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:49.653 19:14:47 -- common/autotest_common.sh@867 -- # local i 00:05:49.653 19:14:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.653 19:14:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.653 19:14:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:49.653 19:14:47 -- common/autotest_common.sh@871 -- # break 00:05:49.653 19:14:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.653 19:14:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.653 19:14:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.653 1+0 records in 00:05:49.653 1+0 records out 00:05:49.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191158 s, 21.4 MB/s 00:05:49.653 19:14:47 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.653 19:14:47 -- common/autotest_common.sh@884 -- # size=4096 00:05:49.653 19:14:47 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.653 19:14:47 -- common/autotest_common.sh@886 -- # 
'[' 4096 '!=' 0 ']' 00:05:49.653 19:14:47 -- common/autotest_common.sh@887 -- # return 0 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.653 19:14:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.912 /dev/nbd1 00:05:50.169 19:14:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.169 19:14:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.169 19:14:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.169 19:14:48 -- common/autotest_common.sh@867 -- # local i 00:05:50.169 19:14:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.169 19:14:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.169 19:14:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.169 19:14:48 -- common/autotest_common.sh@871 -- # break 00:05:50.169 19:14:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.169 19:14:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.170 19:14:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.170 1+0 records in 00:05:50.170 1+0 records out 00:05:50.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024206 s, 16.9 MB/s 00:05:50.170 19:14:48 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.170 19:14:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:50.170 19:14:48 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.170 19:14:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.170 19:14:48 -- common/autotest_common.sh@887 -- # return 0 00:05:50.170 19:14:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.170 19:14:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.170 19:14:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.170 19:14:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.170 19:14:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.427 { 00:05:50.427 "nbd_device": "/dev/nbd0", 00:05:50.427 "bdev_name": "Malloc0" 00:05:50.427 }, 00:05:50.427 { 00:05:50.427 "nbd_device": "/dev/nbd1", 00:05:50.427 "bdev_name": "Malloc1" 00:05:50.427 } 00:05:50.427 ]' 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.427 { 00:05:50.427 "nbd_device": "/dev/nbd0", 00:05:50.427 "bdev_name": "Malloc0" 00:05:50.427 }, 00:05:50.427 { 00:05:50.427 "nbd_device": "/dev/nbd1", 00:05:50.427 "bdev_name": "Malloc1" 00:05:50.427 } 00:05:50.427 ]' 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.427 /dev/nbd1' 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.427 /dev/nbd1' 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.427 19:14:48 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.427 19:14:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.428 256+0 records in 00:05:50.428 256+0 records out 00:05:50.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421112 s, 249 MB/s 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.428 256+0 records in 00:05:50.428 256+0 records out 00:05:50.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230863 s, 45.4 MB/s 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.428 256+0 records in 00:05:50.428 256+0 records out 00:05:50.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250295 s, 41.9 MB/s 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@51 -- # local i 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.428 19:14:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.686 19:14:48 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@41 -- # break 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.686 19:14:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@41 -- # break 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.944 19:14:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@65 -- # true 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.202 19:14:49 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.202 19:14:49 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.460 19:14:49 -- event/event.sh@35 -- # sleep 3 00:05:51.717 [2024-11-17 19:14:49.933065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.975 [2024-11-17 19:14:50.027684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.975 [2024-11-17 19:14:50.027688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.975 [2024-11-17 19:14:50.090636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.975 [2024-11-17 19:14:50.090733] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:54.517 19:14:52 -- event/event.sh@38 -- # waitforlisten 1077673 /var/tmp/spdk-nbd.sock 00:05:54.517 19:14:52 -- common/autotest_common.sh@829 -- # '[' -z 1077673 ']' 00:05:54.517 19:14:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.517 19:14:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.517 19:14:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.517 19:14:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.517 19:14:52 -- common/autotest_common.sh@10 -- # set +x 00:05:54.775 19:14:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.775 19:14:52 -- common/autotest_common.sh@862 -- # return 0 00:05:54.775 19:14:52 -- event/event.sh@39 -- # killprocess 1077673 00:05:54.775 19:14:52 -- common/autotest_common.sh@936 -- # '[' -z 1077673 ']' 00:05:54.775 19:14:52 -- common/autotest_common.sh@940 -- # kill -0 1077673 00:05:54.775 19:14:52 -- common/autotest_common.sh@941 -- # uname 00:05:54.775 19:14:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.775 19:14:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1077673 00:05:54.775 19:14:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.775 19:14:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.775 19:14:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1077673' 00:05:54.775 killing process with pid 1077673 00:05:54.775 19:14:52 -- common/autotest_common.sh@955 -- # kill 1077673 00:05:54.775 19:14:52 -- common/autotest_common.sh@960 -- # wait 1077673 00:05:55.035 spdk_app_start is called in Round 0. 00:05:55.035 Shutdown signal received, stop current app iteration 00:05:55.035 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:55.035 spdk_app_start is called in Round 1. 00:05:55.035 Shutdown signal received, stop current app iteration 00:05:55.035 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:55.035 spdk_app_start is called in Round 2. 00:05:55.035 Shutdown signal received, stop current app iteration 00:05:55.035 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:55.035 spdk_app_start is called in Round 3. 
00:05:55.035 Shutdown signal received, stop current app iteration 00:05:55.035 19:14:53 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.035 19:14:53 -- event/event.sh@42 -- # return 0 00:05:55.035 00:05:55.035 real 0m18.835s 00:05:55.035 user 0m41.339s 00:05:55.035 sys 0m3.163s 00:05:55.035 19:14:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.035 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.035 ************************************ 00:05:55.035 END TEST app_repeat 00:05:55.035 ************************************ 00:05:55.035 19:14:53 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.035 19:14:53 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.035 19:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.035 19:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.035 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.035 ************************************ 00:05:55.035 START TEST cpu_locks 00:05:55.035 ************************************ 00:05:55.035 19:14:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.035 * Looking for test storage... 00:05:55.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.035 19:14:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.035 19:14:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.035 19:14:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.294 19:14:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.294 19:14:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.294 19:14:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.294 19:14:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.294 19:14:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.294 19:14:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.294 19:14:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.294 19:14:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.294 19:14:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.294 19:14:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.294 19:14:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.294 19:14:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.294 19:14:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.294 19:14:53 -- scripts/common.sh@344 -- # : 1 00:05:55.294 19:14:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.294 19:14:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.294 19:14:53 -- scripts/common.sh@364 -- # decimal 1 00:05:55.294 19:14:53 -- scripts/common.sh@352 -- # local d=1 00:05:55.294 19:14:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.294 19:14:53 -- scripts/common.sh@354 -- # echo 1 00:05:55.294 19:14:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.294 19:14:53 -- scripts/common.sh@365 -- # decimal 2 00:05:55.294 19:14:53 -- scripts/common.sh@352 -- # local d=2 00:05:55.294 19:14:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.294 19:14:53 -- scripts/common.sh@354 -- # echo 2 00:05:55.294 19:14:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.294 19:14:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.294 19:14:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.294 19:14:53 -- scripts/common.sh@367 -- # return 0 00:05:55.294 19:14:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.294 19:14:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.294 --rc genhtml_branch_coverage=1 00:05:55.294 --rc genhtml_function_coverage=1 00:05:55.294 --rc genhtml_legend=1 00:05:55.294 --rc geninfo_all_blocks=1 00:05:55.294 --rc geninfo_unexecuted_blocks=1 00:05:55.294 00:05:55.294 ' 00:05:55.294 19:14:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.294 --rc genhtml_branch_coverage=1 00:05:55.294 --rc genhtml_function_coverage=1 00:05:55.294 --rc genhtml_legend=1 00:05:55.294 --rc geninfo_all_blocks=1 00:05:55.294 --rc geninfo_unexecuted_blocks=1 00:05:55.294 00:05:55.294 ' 00:05:55.294 19:14:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.294 --rc genhtml_branch_coverage=1 00:05:55.294 --rc genhtml_function_coverage=1 00:05:55.294 --rc genhtml_legend=1 00:05:55.294 --rc geninfo_all_blocks=1 00:05:55.294 --rc geninfo_unexecuted_blocks=1 00:05:55.294 00:05:55.294 ' 00:05:55.294 19:14:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.294 --rc genhtml_branch_coverage=1 00:05:55.294 --rc genhtml_function_coverage=1 00:05:55.294 --rc genhtml_legend=1 00:05:55.294 --rc geninfo_all_blocks=1 00:05:55.294 --rc geninfo_unexecuted_blocks=1 00:05:55.294 00:05:55.294 ' 00:05:55.294 19:14:53 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.294 19:14:53 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.294 19:14:53 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.294 19:14:53 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.294 19:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.294 19:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.294 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.294 ************************************ 00:05:55.294 START TEST default_locks 00:05:55.294 ************************************ 00:05:55.294 19:14:53 -- common/autotest_common.sh@1114 -- # default_locks 00:05:55.294 19:14:53 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1080242 00:05:55.294 19:14:53 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.294 19:14:53 -- 
event/cpu_locks.sh@47 -- # waitforlisten 1080242 00:05:55.294 19:14:53 -- common/autotest_common.sh@829 -- # '[' -z 1080242 ']' 00:05:55.294 19:14:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.294 19:14:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.294 19:14:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.294 19:14:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.294 19:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.294 [2024-11-17 19:14:53.420760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:55.294 [2024-11-17 19:14:53.420840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080242 ] 00:05:55.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.294 [2024-11-17 19:14:53.486427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.553 [2024-11-17 19:14:53.584296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.553 [2024-11-17 19:14:53.584461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.486 19:14:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.486 19:14:54 -- common/autotest_common.sh@862 -- # return 0 00:05:56.486 19:14:54 -- event/cpu_locks.sh@49 -- # locks_exist 1080242 00:05:56.486 19:14:54 -- event/cpu_locks.sh@22 -- # lslocks -p 1080242 00:05:56.486 19:14:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.486 lslocks: write error 00:05:56.487 19:14:54 -- event/cpu_locks.sh@50 -- # killprocess 1080242 00:05:56.487 19:14:54 -- common/autotest_common.sh@936 -- # '[' -z 1080242 ']' 00:05:56.487 19:14:54 -- common/autotest_common.sh@940 -- # kill -0 1080242 00:05:56.487 19:14:54 -- common/autotest_common.sh@941 -- # uname 00:05:56.487 19:14:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.487 19:14:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1080242 00:05:56.487 19:14:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.487 19:14:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.487 19:14:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1080242' 00:05:56.487 killing process with pid 1080242 00:05:56.487 19:14:54 -- common/autotest_common.sh@955 -- # kill 1080242 00:05:56.487 19:14:54 -- common/autotest_common.sh@960 -- # wait 1080242 00:05:57.053 19:14:55 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1080242 00:05:57.053 19:14:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.053 19:14:55 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1080242 00:05:57.053 19:14:55 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:57.053 19:14:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.053 19:14:55 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:57.053 19:14:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.053 19:14:55 -- common/autotest_common.sh@653 -- # waitforlisten 1080242 00:05:57.053 19:14:55 -- common/autotest_common.sh@829 
-- # '[' -z 1080242 ']' 00:05:57.053 19:14:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.053 19:14:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.053 19:14:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.053 19:14:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.053 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1080242) - No such process 00:05:57.053 ERROR: process (pid: 1080242) is no longer running 00:05:57.053 19:14:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.053 19:14:55 -- common/autotest_common.sh@862 -- # return 1 00:05:57.053 19:14:55 -- common/autotest_common.sh@653 -- # es=1 00:05:57.053 19:14:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.053 19:14:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.053 19:14:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.053 19:14:55 -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.053 19:14:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.053 19:14:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.053 19:14:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.053 00:05:57.053 real 0m1.692s 00:05:57.053 user 0m1.874s 00:05:57.053 sys 0m0.541s 00:05:57.053 19:14:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.053 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 ************************************ 00:05:57.053 END TEST default_locks 00:05:57.053 ************************************ 00:05:57.053 19:14:55 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.053 19:14:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.053 19:14:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.053 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 ************************************ 00:05:57.053 START TEST default_locks_via_rpc 00:05:57.053 ************************************ 00:05:57.053 19:14:55 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:57.053 19:14:55 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1080423 00:05:57.053 19:14:55 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.053 19:14:55 -- event/cpu_locks.sh@63 -- # waitforlisten 1080423 00:05:57.053 19:14:55 -- common/autotest_common.sh@829 -- # '[' -z 1080423 ']' 00:05:57.053 19:14:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.053 19:14:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.053 19:14:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.053 19:14:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.053 19:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 [2024-11-17 19:14:55.135262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:57.053 [2024-11-17 19:14:55.135351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080423 ] 00:05:57.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.053 [2024-11-17 19:14:55.192368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.053 [2024-11-17 19:14:55.281924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.053 [2024-11-17 19:14:55.282082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.988 19:14:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.988 19:14:56 -- common/autotest_common.sh@862 -- # return 0 00:05:57.988 19:14:56 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.988 19:14:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.988 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:57.988 19:14:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.988 19:14:56 -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.988 19:14:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.988 19:14:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.988 19:14:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.988 19:14:56 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.988 19:14:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.988 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:57.988 19:14:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.988 19:14:56 -- event/cpu_locks.sh@71 -- # locks_exist 1080423 00:05:57.988 19:14:56 -- event/cpu_locks.sh@22 -- # lslocks -p 1080423 00:05:57.988 19:14:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.246 19:14:56 -- event/cpu_locks.sh@73 -- # killprocess 1080423 00:05:58.246 19:14:56 -- common/autotest_common.sh@936 -- # '[' -z 1080423 ']' 00:05:58.246 19:14:56 -- common/autotest_common.sh@940 -- # kill -0 1080423 00:05:58.246 19:14:56 -- common/autotest_common.sh@941 -- # uname 00:05:58.246 19:14:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.246 19:14:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1080423 00:05:58.246 19:14:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.246 19:14:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.246 19:14:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1080423' 00:05:58.246 killing process with pid 1080423 00:05:58.246 19:14:56 -- common/autotest_common.sh@955 -- # kill 1080423 00:05:58.246 19:14:56 -- common/autotest_common.sh@960 -- # wait 1080423 00:05:58.812 00:05:58.812 real 0m1.780s 00:05:58.812 user 0m1.956s 00:05:58.812 sys 0m0.544s 00:05:58.812 19:14:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.813 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.813 ************************************ 00:05:58.813 END TEST default_locks_via_rpc 00:05:58.813 ************************************ 00:05:58.813 19:14:56 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.813 19:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.813 19:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.813 19:14:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:58.813 ************************************ 00:05:58.813 START TEST non_locking_app_on_locked_coremask 00:05:58.813 ************************************ 00:05:58.813 19:14:56 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:58.813 19:14:56 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1080708 00:05:58.813 19:14:56 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.813 19:14:56 -- event/cpu_locks.sh@81 -- # waitforlisten 1080708 /var/tmp/spdk.sock 00:05:58.813 19:14:56 -- common/autotest_common.sh@829 -- # '[' -z 1080708 ']' 00:05:58.813 19:14:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.813 19:14:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.813 19:14:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.813 19:14:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.813 19:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.813 [2024-11-17 19:14:56.941516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:58.813 [2024-11-17 19:14:56.941601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080708 ] 00:05:58.813 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.813 [2024-11-17 19:14:57.005704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.071 [2024-11-17 19:14:57.098497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.071 [2024-11-17 19:14:57.098636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.006 19:14:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.006 19:14:57 -- common/autotest_common.sh@862 -- # return 0 00:06:00.006 19:14:57 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1080848 00:06:00.006 19:14:57 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.006 19:14:57 -- event/cpu_locks.sh@85 -- # waitforlisten 1080848 /var/tmp/spdk2.sock 00:06:00.006 19:14:57 -- common/autotest_common.sh@829 -- # '[' -z 1080848 ']' 00:06:00.006 19:14:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.006 19:14:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.006 19:14:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.006 19:14:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.006 19:14:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.006 [2024-11-17 19:14:57.955491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:00.006 [2024-11-17 19:14:57.955571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080848 ] 00:06:00.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.006 [2024-11-17 19:14:58.046231] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.006 [2024-11-17 19:14:58.046265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.006 [2024-11-17 19:14:58.229496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.006 [2024-11-17 19:14:58.229637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.942 19:14:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.942 19:14:58 -- common/autotest_common.sh@862 -- # return 0 00:06:00.942 19:14:58 -- event/cpu_locks.sh@87 -- # locks_exist 1080708 00:06:00.942 19:14:58 -- event/cpu_locks.sh@22 -- # lslocks -p 1080708 00:06:00.942 19:14:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.200 lslocks: write error 00:06:01.200 19:14:59 -- event/cpu_locks.sh@89 -- # killprocess 1080708 00:06:01.200 19:14:59 -- common/autotest_common.sh@936 -- # '[' -z 1080708 ']' 00:06:01.200 19:14:59 -- common/autotest_common.sh@940 -- # kill -0 1080708 00:06:01.200 19:14:59 -- common/autotest_common.sh@941 -- # uname 00:06:01.200 19:14:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.200 19:14:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1080708 00:06:01.200 19:14:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.200 19:14:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.200 19:14:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1080708' 00:06:01.200 killing process with pid 1080708 00:06:01.200 19:14:59 -- common/autotest_common.sh@955 -- # kill 1080708 00:06:01.200 19:14:59 -- common/autotest_common.sh@960 -- # wait 1080708 00:06:02.133 19:15:00 -- event/cpu_locks.sh@90 -- # killprocess 1080848 00:06:02.133 19:15:00 -- common/autotest_common.sh@936 -- # '[' -z 1080848 ']' 00:06:02.133 19:15:00 -- common/autotest_common.sh@940 -- # kill -0 1080848 00:06:02.133 19:15:00 -- common/autotest_common.sh@941 -- # uname 00:06:02.134 19:15:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.134 19:15:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1080848 00:06:02.134 19:15:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.134 19:15:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.134 19:15:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1080848' 00:06:02.134 killing process with pid 1080848 00:06:02.134 19:15:00 -- common/autotest_common.sh@955 -- # kill 1080848 00:06:02.134 19:15:00 -- common/autotest_common.sh@960 -- # wait 1080848 00:06:02.392 00:06:02.392 real 0m3.644s 00:06:02.392 user 0m4.005s 00:06:02.392 sys 0m1.062s 00:06:02.392 19:15:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.392 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.392 ************************************ 00:06:02.392 END TEST non_locking_app_on_locked_coremask 00:06:02.392 ************************************ 00:06:02.392 19:15:00 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:02.392 19:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.392 19:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.392 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.392 ************************************ 00:06:02.392 START TEST locking_app_on_unlocked_coremask 00:06:02.392 ************************************ 00:06:02.392 19:15:00 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:02.392 19:15:00 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1081211 00:06:02.392 19:15:00 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.392 19:15:00 -- event/cpu_locks.sh@99 -- # waitforlisten 1081211 /var/tmp/spdk.sock 00:06:02.392 19:15:00 -- common/autotest_common.sh@829 -- # '[' -z 1081211 ']' 00:06:02.392 19:15:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.392 19:15:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.392 19:15:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.392 19:15:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.392 19:15:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.392 [2024-11-17 19:15:00.617540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:02.392 [2024-11-17 19:15:00.617634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081211 ] 00:06:02.392 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.650 [2024-11-17 19:15:00.684403] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.650 [2024-11-17 19:15:00.684445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.650 [2024-11-17 19:15:00.778797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.650 [2024-11-17 19:15:00.778955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.585 19:15:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.585 19:15:01 -- common/autotest_common.sh@862 -- # return 0 00:06:03.585 19:15:01 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1081404 00:06:03.585 19:15:01 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.585 19:15:01 -- event/cpu_locks.sh@103 -- # waitforlisten 1081404 /var/tmp/spdk2.sock 00:06:03.585 19:15:01 -- common/autotest_common.sh@829 -- # '[' -z 1081404 ']' 00:06:03.585 19:15:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.585 19:15:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.585 19:15:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:03.585 19:15:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.585 19:15:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.585 [2024-11-17 19:15:01.673013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:03.585 [2024-11-17 19:15:01.673120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081404 ] 00:06:03.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.585 [2024-11-17 19:15:01.764115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.843 [2024-11-17 19:15:01.940601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.843 [2024-11-17 19:15:01.944794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.409 19:15:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.409 19:15:02 -- common/autotest_common.sh@862 -- # return 0 00:06:04.409 19:15:02 -- event/cpu_locks.sh@105 -- # locks_exist 1081404 00:06:04.409 19:15:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1081404 00:06:04.410 19:15:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.976 lslocks: write error 00:06:04.976 19:15:02 -- event/cpu_locks.sh@107 -- # killprocess 1081211 00:06:04.976 19:15:02 -- common/autotest_common.sh@936 -- # '[' -z 1081211 ']' 00:06:04.976 19:15:02 -- common/autotest_common.sh@940 -- # kill -0 1081211 00:06:04.976 19:15:02 -- common/autotest_common.sh@941 -- # uname 00:06:04.976 19:15:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.976 19:15:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1081211 00:06:04.976 19:15:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.976 19:15:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.976 19:15:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1081211' 00:06:04.976 killing process with pid 1081211 00:06:04.976 19:15:03 -- common/autotest_common.sh@955 -- # kill 1081211 00:06:04.976 19:15:03 -- common/autotest_common.sh@960 -- # wait 1081211 00:06:05.910 19:15:03 -- event/cpu_locks.sh@108 -- # killprocess 1081404 00:06:05.910 19:15:03 -- common/autotest_common.sh@936 -- # '[' -z 1081404 ']' 00:06:05.910 19:15:03 -- common/autotest_common.sh@940 -- # kill -0 1081404 00:06:05.910 19:15:03 -- common/autotest_common.sh@941 -- # uname 00:06:05.910 19:15:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.910 19:15:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1081404 00:06:05.910 19:15:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.910 19:15:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.910 19:15:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1081404' 00:06:05.910 killing process with pid 1081404 00:06:05.910 19:15:03 -- common/autotest_common.sh@955 -- # kill 1081404 00:06:05.910 19:15:03 -- common/autotest_common.sh@960 -- # wait 1081404 00:06:06.168 00:06:06.168 real 0m3.721s 00:06:06.168 user 0m4.123s 00:06:06.168 sys 0m1.090s 00:06:06.168 19:15:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.168 19:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 ************************************ 00:06:06.168 END TEST locking_app_on_unlocked_coremask 
00:06:06.168 ************************************ 00:06:06.168 19:15:04 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.168 19:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.168 19:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.168 19:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 ************************************ 00:06:06.168 START TEST locking_app_on_locked_coremask 00:06:06.168 ************************************ 00:06:06.168 19:15:04 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:06.168 19:15:04 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1081737 00:06:06.168 19:15:04 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.168 19:15:04 -- event/cpu_locks.sh@116 -- # waitforlisten 1081737 /var/tmp/spdk.sock 00:06:06.168 19:15:04 -- common/autotest_common.sh@829 -- # '[' -z 1081737 ']' 00:06:06.168 19:15:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.168 19:15:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.168 19:15:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.168 19:15:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.168 19:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 [2024-11-17 19:15:04.364760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:06.168 [2024-11-17 19:15:04.364847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081737 ] 00:06:06.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.168 [2024-11-17 19:15:04.428844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.427 [2024-11-17 19:15:04.518482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.427 [2024-11-17 19:15:04.518689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.360 19:15:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.360 19:15:05 -- common/autotest_common.sh@862 -- # return 0 00:06:07.360 19:15:05 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1081876 00:06:07.360 19:15:05 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1081876 /var/tmp/spdk2.sock 00:06:07.360 19:15:05 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.360 19:15:05 -- common/autotest_common.sh@650 -- # local es=0 00:06:07.360 19:15:05 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1081876 /var/tmp/spdk2.sock 00:06:07.360 19:15:05 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.360 19:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.360 19:15:05 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.360 19:15:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.360 19:15:05 -- common/autotest_common.sh@653 -- # waitforlisten 1081876 /var/tmp/spdk2.sock 00:06:07.360 19:15:05 -- common/autotest_common.sh@829 -- 
# '[' -z 1081876 ']' 00:06:07.360 19:15:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.360 19:15:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.360 19:15:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.361 19:15:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.361 19:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.361 [2024-11-17 19:15:05.380529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:07.361 [2024-11-17 19:15:05.380614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081876 ] 00:06:07.361 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.361 [2024-11-17 19:15:05.475131] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1081737 has claimed it. 00:06:07.361 [2024-11-17 19:15:05.475199] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1081876) - No such process 00:06:07.926 ERROR: process (pid: 1081876) is no longer running 00:06:07.926 19:15:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.926 19:15:06 -- common/autotest_common.sh@862 -- # return 1 00:06:07.926 19:15:06 -- common/autotest_common.sh@653 -- # es=1 00:06:07.926 19:15:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.926 19:15:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.926 19:15:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.926 19:15:06 -- event/cpu_locks.sh@122 -- # locks_exist 1081737 00:06:07.926 19:15:06 -- event/cpu_locks.sh@22 -- # lslocks -p 1081737 00:06:07.926 19:15:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.185 lslocks: write error 00:06:08.185 19:15:06 -- event/cpu_locks.sh@124 -- # killprocess 1081737 00:06:08.185 19:15:06 -- common/autotest_common.sh@936 -- # '[' -z 1081737 ']' 00:06:08.185 19:15:06 -- common/autotest_common.sh@940 -- # kill -0 1081737 00:06:08.185 19:15:06 -- common/autotest_common.sh@941 -- # uname 00:06:08.185 19:15:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.185 19:15:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1081737 00:06:08.185 19:15:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.185 19:15:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.185 19:15:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1081737' 00:06:08.185 killing process with pid 1081737 00:06:08.185 19:15:06 -- common/autotest_common.sh@955 -- # kill 1081737 00:06:08.185 19:15:06 -- common/autotest_common.sh@960 -- # wait 1081737 00:06:08.751 00:06:08.751 real 0m2.503s 00:06:08.751 user 0m2.869s 00:06:08.751 sys 0m0.692s 00:06:08.751 19:15:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.751 19:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.751 ************************************ 00:06:08.751 END TEST locking_app_on_locked_coremask 00:06:08.751 ************************************ 00:06:08.751 
19:15:06 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.751 19:15:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.751 19:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.751 19:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.751 ************************************ 00:06:08.751 START TEST locking_overlapped_coremask 00:06:08.751 ************************************ 00:06:08.751 19:15:06 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:08.751 19:15:06 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1082162 00:06:08.751 19:15:06 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.751 19:15:06 -- event/cpu_locks.sh@133 -- # waitforlisten 1082162 /var/tmp/spdk.sock 00:06:08.751 19:15:06 -- common/autotest_common.sh@829 -- # '[' -z 1082162 ']' 00:06:08.751 19:15:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.751 19:15:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.751 19:15:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.751 19:15:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.751 19:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.751 [2024-11-17 19:15:06.900019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:08.751 [2024-11-17 19:15:06.900098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082162 ] 00:06:08.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.751 [2024-11-17 19:15:06.957475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.010 [2024-11-17 19:15:07.048403] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.010 [2024-11-17 19:15:07.048608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.010 [2024-11-17 19:15:07.048681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.010 [2024-11-17 19:15:07.048684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.943 19:15:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.943 19:15:07 -- common/autotest_common.sh@862 -- # return 0 00:06:09.943 19:15:07 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1082607 00:06:09.943 19:15:07 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1082607 /var/tmp/spdk2.sock 00:06:09.943 19:15:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.943 19:15:07 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.943 19:15:07 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1082607 /var/tmp/spdk2.sock 00:06:09.943 19:15:07 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.943 19:15:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.943 19:15:07 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.943 19:15:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.943 19:15:07 
-- common/autotest_common.sh@653 -- # waitforlisten 1082607 /var/tmp/spdk2.sock 00:06:09.943 19:15:07 -- common/autotest_common.sh@829 -- # '[' -z 1082607 ']' 00:06:09.943 19:15:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.943 19:15:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.943 19:15:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.943 19:15:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.943 19:15:07 -- common/autotest_common.sh@10 -- # set +x 00:06:09.943 [2024-11-17 19:15:07.935491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:09.943 [2024-11-17 19:15:07.935593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082607 ] 00:06:09.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.944 [2024-11-17 19:15:08.023748] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1082162 has claimed it. 00:06:09.944 [2024-11-17 19:15:08.023799] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1082607) - No such process 00:06:10.510 ERROR: process (pid: 1082607) is no longer running 00:06:10.510 19:15:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.510 19:15:08 -- common/autotest_common.sh@862 -- # return 1 00:06:10.510 19:15:08 -- common/autotest_common.sh@653 -- # es=1 00:06:10.510 19:15:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.510 19:15:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.510 19:15:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.510 19:15:08 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.510 19:15:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.510 19:15:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.510 19:15:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.510 19:15:08 -- event/cpu_locks.sh@141 -- # killprocess 1082162 00:06:10.510 19:15:08 -- common/autotest_common.sh@936 -- # '[' -z 1082162 ']' 00:06:10.510 19:15:08 -- common/autotest_common.sh@940 -- # kill -0 1082162 00:06:10.510 19:15:08 -- common/autotest_common.sh@941 -- # uname 00:06:10.510 19:15:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.510 19:15:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1082162 00:06:10.510 19:15:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.510 19:15:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.510 19:15:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1082162' 00:06:10.510 killing process with pid 1082162 00:06:10.510 19:15:08 -- common/autotest_common.sh@955 -- # kill 1082162 00:06:10.510 19:15:08 
-- common/autotest_common.sh@960 -- # wait 1082162 00:06:11.082 00:06:11.082 real 0m2.215s 00:06:11.082 user 0m6.455s 00:06:11.082 sys 0m0.475s 00:06:11.082 19:15:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.082 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:06:11.082 ************************************ 00:06:11.082 END TEST locking_overlapped_coremask 00:06:11.082 ************************************ 00:06:11.082 19:15:09 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.082 19:15:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.082 19:15:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.082 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:06:11.082 ************************************ 00:06:11.082 START TEST locking_overlapped_coremask_via_rpc 00:06:11.082 ************************************ 00:06:11.082 19:15:09 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:11.082 19:15:09 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1082969 00:06:11.082 19:15:09 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.082 19:15:09 -- event/cpu_locks.sh@149 -- # waitforlisten 1082969 /var/tmp/spdk.sock 00:06:11.082 19:15:09 -- common/autotest_common.sh@829 -- # '[' -z 1082969 ']' 00:06:11.082 19:15:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.082 19:15:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.082 19:15:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.082 19:15:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.082 19:15:09 -- common/autotest_common.sh@10 -- # set +x 00:06:11.082 [2024-11-17 19:15:09.143238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:11.082 [2024-11-17 19:15:09.143316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082969 ] 00:06:11.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.082 [2024-11-17 19:15:09.204425] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.082 [2024-11-17 19:15:09.204469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.082 [2024-11-17 19:15:09.297481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.082 [2024-11-17 19:15:09.299717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.082 [2024-11-17 19:15:09.299762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.082 [2024-11-17 19:15:09.299766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.023 19:15:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.023 19:15:10 -- common/autotest_common.sh@862 -- # return 0 00:06:12.023 19:15:10 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1083124 00:06:12.023 19:15:10 -- event/cpu_locks.sh@153 -- # waitforlisten 1083124 /var/tmp/spdk2.sock 00:06:12.023 19:15:10 -- common/autotest_common.sh@829 -- # '[' -z 1083124 ']' 00:06:12.023 19:15:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.023 19:15:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.023 19:15:10 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.023 19:15:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.023 19:15:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.023 19:15:10 -- common/autotest_common.sh@10 -- # set +x 00:06:12.023 [2024-11-17 19:15:10.205457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:12.023 [2024-11-17 19:15:10.205541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083124 ] 00:06:12.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.281 [2024-11-17 19:15:10.293052] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.281 [2024-11-17 19:15:10.293085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.281 [2024-11-17 19:15:10.470007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.281 [2024-11-17 19:15:10.470216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.281 [2024-11-17 19:15:10.473725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.281 [2024-11-17 19:15:10.473727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.214 19:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.214 19:15:11 -- common/autotest_common.sh@862 -- # return 0 00:06:13.214 19:15:11 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.214 19:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.214 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.214 19:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.214 19:15:11 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.214 19:15:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.214 19:15:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.214 19:15:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:13.214 19:15:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.214 19:15:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:13.214 19:15:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.214 19:15:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.214 19:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.214 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.214 [2024-11-17 19:15:11.187777] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1082969 has claimed it. 00:06:13.214 request: 00:06:13.214 { 00:06:13.214 "method": "framework_enable_cpumask_locks", 00:06:13.214 "req_id": 1 00:06:13.214 } 00:06:13.214 Got JSON-RPC error response 00:06:13.214 response: 00:06:13.214 { 00:06:13.214 "code": -32603, 00:06:13.214 "message": "Failed to claim CPU core: 2" 00:06:13.214 } 00:06:13.214 19:15:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.214 19:15:11 -- common/autotest_common.sh@653 -- # es=1 00:06:13.214 19:15:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.214 19:15:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.214 19:15:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.214 19:15:11 -- event/cpu_locks.sh@158 -- # waitforlisten 1082969 /var/tmp/spdk.sock 00:06:13.214 19:15:11 -- common/autotest_common.sh@829 -- # '[' -z 1082969 ']' 00:06:13.214 19:15:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.214 19:15:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.214 19:15:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.214 19:15:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.214 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.214 19:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.214 19:15:11 -- common/autotest_common.sh@862 -- # return 0 00:06:13.214 19:15:11 -- event/cpu_locks.sh@159 -- # waitforlisten 1083124 /var/tmp/spdk2.sock 00:06:13.214 19:15:11 -- common/autotest_common.sh@829 -- # '[' -z 1083124 ']' 00:06:13.214 19:15:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.214 19:15:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.214 19:15:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.214 19:15:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.214 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.473 19:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.473 19:15:11 -- common/autotest_common.sh@862 -- # return 0 00:06:13.473 19:15:11 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.473 19:15:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.473 19:15:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.473 19:15:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.473 00:06:13.473 real 0m2.603s 00:06:13.473 user 0m1.309s 00:06:13.473 sys 0m0.218s 00:06:13.473 19:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.473 19:15:11 -- common/autotest_common.sh@10 -- # set +x 00:06:13.473 ************************************ 00:06:13.473 END TEST locking_overlapped_coremask_via_rpc 00:06:13.473 ************************************ 00:06:13.473 19:15:11 -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.473 19:15:11 -- event/cpu_locks.sh@15 -- # [[ -z 1082969 ]] 00:06:13.473 19:15:11 -- event/cpu_locks.sh@15 -- # killprocess 1082969 00:06:13.473 19:15:11 -- common/autotest_common.sh@936 -- # '[' -z 1082969 ']' 00:06:13.473 19:15:11 -- common/autotest_common.sh@940 -- # kill -0 1082969 00:06:13.473 19:15:11 -- common/autotest_common.sh@941 -- # uname 00:06:13.473 19:15:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.473 19:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1082969 00:06:13.731 19:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.731 19:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.731 19:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1082969' 00:06:13.731 killing process with pid 1082969 00:06:13.731 19:15:11 -- common/autotest_common.sh@955 -- # kill 1082969 00:06:13.731 19:15:11 -- common/autotest_common.sh@960 -- # wait 1082969 00:06:13.989 19:15:12 -- event/cpu_locks.sh@16 -- # [[ -z 1083124 ]] 00:06:13.989 19:15:12 -- event/cpu_locks.sh@16 -- # killprocess 1083124 00:06:13.989 19:15:12 -- common/autotest_common.sh@936 -- # '[' -z 1083124 ']' 00:06:13.989 19:15:12 -- common/autotest_common.sh@940 -- # kill -0 1083124 00:06:13.989 19:15:12 -- common/autotest_common.sh@941 -- # uname 
00:06:13.989 19:15:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.989 19:15:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1083124 00:06:13.989 19:15:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:13.989 19:15:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:13.989 19:15:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1083124' 00:06:13.989 killing process with pid 1083124 00:06:13.989 19:15:12 -- common/autotest_common.sh@955 -- # kill 1083124 00:06:13.989 19:15:12 -- common/autotest_common.sh@960 -- # wait 1083124 00:06:14.556 19:15:12 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.556 19:15:12 -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.556 19:15:12 -- event/cpu_locks.sh@15 -- # [[ -z 1082969 ]] 00:06:14.556 19:15:12 -- event/cpu_locks.sh@15 -- # killprocess 1082969 00:06:14.556 19:15:12 -- common/autotest_common.sh@936 -- # '[' -z 1082969 ']' 00:06:14.556 19:15:12 -- common/autotest_common.sh@940 -- # kill -0 1082969 00:06:14.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1082969) - No such process 00:06:14.556 19:15:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1082969 is not found' 00:06:14.556 Process with pid 1082969 is not found 00:06:14.556 19:15:12 -- event/cpu_locks.sh@16 -- # [[ -z 1083124 ]] 00:06:14.556 19:15:12 -- event/cpu_locks.sh@16 -- # killprocess 1083124 00:06:14.556 19:15:12 -- common/autotest_common.sh@936 -- # '[' -z 1083124 ']' 00:06:14.556 19:15:12 -- common/autotest_common.sh@940 -- # kill -0 1083124 00:06:14.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1083124) - No such process 00:06:14.556 19:15:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1083124 is not found' 00:06:14.556 Process with pid 1083124 is not found 00:06:14.556 19:15:12 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.556 00:06:14.556 real 0m19.382s 00:06:14.556 user 0m35.424s 00:06:14.556 sys 0m5.453s 00:06:14.556 19:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.556 19:15:12 -- common/autotest_common.sh@10 -- # set +x 00:06:14.556 ************************************ 00:06:14.556 END TEST cpu_locks 00:06:14.556 ************************************ 00:06:14.556 00:06:14.556 real 0m46.122s 00:06:14.556 user 1m29.292s 00:06:14.556 sys 0m9.391s 00:06:14.556 19:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.556 19:15:12 -- common/autotest_common.sh@10 -- # set +x 00:06:14.556 ************************************ 00:06:14.556 END TEST event 00:06:14.556 ************************************ 00:06:14.556 19:15:12 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.556 19:15:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.556 19:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.556 19:15:12 -- common/autotest_common.sh@10 -- # set +x 00:06:14.556 ************************************ 00:06:14.556 START TEST thread 00:06:14.556 ************************************ 00:06:14.556 19:15:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.556 * Looking for test storage... 
00:06:14.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:14.556 19:15:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:14.556 19:15:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:14.556 19:15:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:14.556 19:15:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:14.556 19:15:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:14.556 19:15:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:14.556 19:15:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:14.556 19:15:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:14.556 19:15:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:14.556 19:15:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.556 19:15:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:14.556 19:15:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:14.556 19:15:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:14.556 19:15:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:14.556 19:15:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:14.556 19:15:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:14.556 19:15:12 -- scripts/common.sh@344 -- # : 1 00:06:14.556 19:15:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:14.556 19:15:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.556 19:15:12 -- scripts/common.sh@364 -- # decimal 1 00:06:14.556 19:15:12 -- scripts/common.sh@352 -- # local d=1 00:06:14.556 19:15:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.556 19:15:12 -- scripts/common.sh@354 -- # echo 1 00:06:14.556 19:15:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:14.556 19:15:12 -- scripts/common.sh@365 -- # decimal 2 00:06:14.556 19:15:12 -- scripts/common.sh@352 -- # local d=2 00:06:14.556 19:15:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.556 19:15:12 -- scripts/common.sh@354 -- # echo 2 00:06:14.556 19:15:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:14.556 19:15:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:14.556 19:15:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:14.556 19:15:12 -- scripts/common.sh@367 -- # return 0 00:06:14.556 19:15:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.556 19:15:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:14.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.556 --rc genhtml_branch_coverage=1 00:06:14.556 --rc genhtml_function_coverage=1 00:06:14.556 --rc genhtml_legend=1 00:06:14.556 --rc geninfo_all_blocks=1 00:06:14.556 --rc geninfo_unexecuted_blocks=1 00:06:14.556 00:06:14.556 ' 00:06:14.556 19:15:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:14.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.556 --rc genhtml_branch_coverage=1 00:06:14.556 --rc genhtml_function_coverage=1 00:06:14.556 --rc genhtml_legend=1 00:06:14.556 --rc geninfo_all_blocks=1 00:06:14.556 --rc geninfo_unexecuted_blocks=1 00:06:14.556 00:06:14.556 ' 00:06:14.556 19:15:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:14.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.556 --rc genhtml_branch_coverage=1 00:06:14.556 --rc genhtml_function_coverage=1 00:06:14.556 --rc genhtml_legend=1 00:06:14.556 --rc geninfo_all_blocks=1 00:06:14.556 --rc geninfo_unexecuted_blocks=1 00:06:14.556 00:06:14.556 ' 
00:06:14.556 19:15:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:14.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.556 --rc genhtml_branch_coverage=1 00:06:14.556 --rc genhtml_function_coverage=1 00:06:14.556 --rc genhtml_legend=1 00:06:14.556 --rc geninfo_all_blocks=1 00:06:14.556 --rc geninfo_unexecuted_blocks=1 00:06:14.556 00:06:14.556 ' 00:06:14.556 19:15:12 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.556 19:15:12 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:14.556 19:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.556 19:15:12 -- common/autotest_common.sh@10 -- # set +x 00:06:14.556 ************************************ 00:06:14.556 START TEST thread_poller_perf 00:06:14.556 ************************************ 00:06:14.556 19:15:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.556 [2024-11-17 19:15:12.801959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:14.556 [2024-11-17 19:15:12.802052] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083502 ] 00:06:14.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.815 [2024-11-17 19:15:12.864686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.815 [2024-11-17 19:15:12.954827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.815 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:16.196 [2024-11-17T18:15:14.463Z] ====================================== 00:06:16.196 [2024-11-17T18:15:14.463Z] busy:2711342140 (cyc) 00:06:16.196 [2024-11-17T18:15:14.463Z] total_run_count: 279000 00:06:16.196 [2024-11-17T18:15:14.463Z] tsc_hz: 2700000000 (cyc) 00:06:16.196 [2024-11-17T18:15:14.463Z] ====================================== 00:06:16.196 [2024-11-17T18:15:14.463Z] poller_cost: 9718 (cyc), 3599 (nsec) 00:06:16.196 00:06:16.196 real 0m1.250s 00:06:16.196 user 0m1.163s 00:06:16.196 sys 0m0.081s 00:06:16.196 19:15:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.196 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:06:16.196 ************************************ 00:06:16.196 END TEST thread_poller_perf 00:06:16.196 ************************************ 00:06:16.197 19:15:14 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.197 19:15:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:16.197 19:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.197 19:15:14 -- common/autotest_common.sh@10 -- # set +x 00:06:16.197 ************************************ 00:06:16.197 START TEST thread_poller_perf 00:06:16.197 ************************************ 00:06:16.197 19:15:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.197 [2024-11-17 19:15:14.076343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:16.197 [2024-11-17 19:15:14.076410] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083656 ] 00:06:16.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.197 [2024-11-17 19:15:14.136815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.197 [2024-11-17 19:15:14.227542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.197 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:17.199 [2024-11-17T18:15:15.466Z] ====================================== 00:06:17.199 [2024-11-17T18:15:15.466Z] busy:2703318460 (cyc) 00:06:17.199 [2024-11-17T18:15:15.466Z] total_run_count: 3831000 00:06:17.199 [2024-11-17T18:15:15.466Z] tsc_hz: 2700000000 (cyc) 00:06:17.199 [2024-11-17T18:15:15.466Z] ====================================== 00:06:17.199 [2024-11-17T18:15:15.466Z] poller_cost: 705 (cyc), 261 (nsec) 00:06:17.199 00:06:17.199 real 0m1.243s 00:06:17.199 user 0m1.153s 00:06:17.199 sys 0m0.084s 00:06:17.199 19:15:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.199 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.199 ************************************ 00:06:17.199 END TEST thread_poller_perf 00:06:17.199 ************************************ 00:06:17.199 19:15:15 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.199 00:06:17.199 real 0m2.678s 00:06:17.199 user 0m2.426s 00:06:17.199 sys 0m0.255s 00:06:17.199 19:15:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.199 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.199 ************************************ 00:06:17.199 END TEST thread 00:06:17.199 ************************************ 00:06:17.199 19:15:15 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.199 19:15:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.199 19:15:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.199 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.199 ************************************ 00:06:17.199 START TEST accel 00:06:17.199 ************************************ 00:06:17.199 19:15:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:17.199 * Looking for test storage... 
00:06:17.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:17.199 19:15:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.199 19:15:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.199 19:15:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.457 19:15:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.457 19:15:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.457 19:15:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.457 19:15:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.457 19:15:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.457 19:15:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.457 19:15:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.457 19:15:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.457 19:15:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.457 19:15:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.457 19:15:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.457 19:15:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.457 19:15:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.457 19:15:15 -- scripts/common.sh@344 -- # : 1 00:06:17.457 19:15:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.457 19:15:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.457 19:15:15 -- scripts/common.sh@364 -- # decimal 1 00:06:17.457 19:15:15 -- scripts/common.sh@352 -- # local d=1 00:06:17.457 19:15:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.457 19:15:15 -- scripts/common.sh@354 -- # echo 1 00:06:17.457 19:15:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.457 19:15:15 -- scripts/common.sh@365 -- # decimal 2 00:06:17.457 19:15:15 -- scripts/common.sh@352 -- # local d=2 00:06:17.457 19:15:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.457 19:15:15 -- scripts/common.sh@354 -- # echo 2 00:06:17.457 19:15:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.457 19:15:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.457 19:15:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.457 19:15:15 -- scripts/common.sh@367 -- # return 0 00:06:17.457 19:15:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.457 19:15:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.457 --rc genhtml_branch_coverage=1 00:06:17.457 --rc genhtml_function_coverage=1 00:06:17.457 --rc genhtml_legend=1 00:06:17.457 --rc geninfo_all_blocks=1 00:06:17.457 --rc geninfo_unexecuted_blocks=1 00:06:17.457 00:06:17.457 ' 00:06:17.458 19:15:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.458 --rc genhtml_branch_coverage=1 00:06:17.458 --rc genhtml_function_coverage=1 00:06:17.458 --rc genhtml_legend=1 00:06:17.458 --rc geninfo_all_blocks=1 00:06:17.458 --rc geninfo_unexecuted_blocks=1 00:06:17.458 00:06:17.458 ' 00:06:17.458 19:15:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.458 --rc genhtml_branch_coverage=1 00:06:17.458 --rc genhtml_function_coverage=1 00:06:17.458 --rc genhtml_legend=1 00:06:17.458 --rc geninfo_all_blocks=1 00:06:17.458 --rc geninfo_unexecuted_blocks=1 00:06:17.458 00:06:17.458 ' 
00:06:17.458 19:15:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.458 --rc genhtml_branch_coverage=1 00:06:17.458 --rc genhtml_function_coverage=1 00:06:17.458 --rc genhtml_legend=1 00:06:17.458 --rc geninfo_all_blocks=1 00:06:17.458 --rc geninfo_unexecuted_blocks=1 00:06:17.458 00:06:17.458 ' 00:06:17.458 19:15:15 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:17.458 19:15:15 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:17.458 19:15:15 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.458 19:15:15 -- accel/accel.sh@59 -- # spdk_tgt_pid=1083869 00:06:17.458 19:15:15 -- accel/accel.sh@60 -- # waitforlisten 1083869 00:06:17.458 19:15:15 -- common/autotest_common.sh@829 -- # '[' -z 1083869 ']' 00:06:17.458 19:15:15 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:17.458 19:15:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.458 19:15:15 -- accel/accel.sh@58 -- # build_accel_config 00:06:17.458 19:15:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.458 19:15:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.458 19:15:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.458 19:15:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.458 19:15:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.458 19:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:17.458 19:15:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.458 19:15:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.458 19:15:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.458 19:15:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.458 19:15:15 -- accel/accel.sh@42 -- # jq -r . 00:06:17.458 [2024-11-17 19:15:15.549369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.458 [2024-11-17 19:15:15.549464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083869 ] 00:06:17.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.458 [2024-11-17 19:15:15.615727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.458 [2024-11-17 19:15:15.707587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.458 [2024-11-17 19:15:15.707792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.393 19:15:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.393 19:15:16 -- common/autotest_common.sh@862 -- # return 0 00:06:18.393 19:15:16 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:18.393 19:15:16 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:18.393 19:15:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.393 19:15:16 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:18.393 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.393 19:15:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # IFS== 00:06:18.393 19:15:16 -- accel/accel.sh@64 -- # read -r opc module 00:06:18.393 19:15:16 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:18.393 19:15:16 -- accel/accel.sh@67 -- # killprocess 1083869 00:06:18.393 19:15:16 -- common/autotest_common.sh@936 -- # '[' -z 1083869 ']' 00:06:18.393 19:15:16 -- common/autotest_common.sh@940 -- # kill -0 1083869 00:06:18.393 19:15:16 -- common/autotest_common.sh@941 -- # uname 00:06:18.393 19:15:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.393 19:15:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1083869 00:06:18.393 19:15:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.394 19:15:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.394 19:15:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1083869' 00:06:18.394 killing process with pid 1083869 00:06:18.394 19:15:16 -- common/autotest_common.sh@955 -- # kill 1083869 00:06:18.394 19:15:16 -- common/autotest_common.sh@960 -- # wait 1083869 00:06:18.960 19:15:16 -- accel/accel.sh@68 -- # trap - ERR 00:06:18.960 19:15:16 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:18.960 19:15:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:18.960 19:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.960 19:15:16 -- common/autotest_common.sh@10 -- # set +x 00:06:18.960 19:15:16 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:18.960 19:15:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:18.960 19:15:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.960 19:15:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.960 19:15:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.960 19:15:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.960 19:15:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.960 19:15:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.960 19:15:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.960 19:15:16 -- accel/accel.sh@42 -- # jq -r . 
00:06:18.960 19:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.960 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:18.960 19:15:17 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:18.960 19:15:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.960 19:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.960 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:18.960 ************************************ 00:06:18.960 START TEST accel_missing_filename 00:06:18.960 ************************************ 00:06:18.960 19:15:17 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:18.960 19:15:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:18.960 19:15:17 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:18.960 19:15:17 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:18.960 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.960 19:15:17 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:18.960 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.960 19:15:17 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:18.960 19:15:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:18.960 19:15:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.960 19:15:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.960 19:15:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.960 19:15:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.960 19:15:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.960 19:15:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.960 19:15:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.960 19:15:17 -- accel/accel.sh@42 -- # jq -r . 00:06:18.960 [2024-11-17 19:15:17.059619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.960 [2024-11-17 19:15:17.059723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084163 ] 00:06:18.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.960 [2024-11-17 19:15:17.121553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.960 [2024-11-17 19:15:17.212395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.218 [2024-11-17 19:15:17.274189] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.218 [2024-11-17 19:15:17.359708] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:19.218 A filename is required. 
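Editor's note: the failure above is the intended result of the negative test; compress was requested without an input file, so accel_perf aborts with "A filename is required." For comparison, a compress run that supplies the input via -l would look like the sketch below. The flag and the bib file path are taken from the accel_compress_verify test that follows; whether the run actually succeeds also depends on a compress-capable module being available, so this is illustrative only.

    # Illustrative only: compress with an uncompressed input file and no -y,
    # since the verify switch is rejected for compress (see the next test).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib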
00:06:19.218 19:15:17 -- common/autotest_common.sh@653 -- # es=234 00:06:19.218 19:15:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.218 19:15:17 -- common/autotest_common.sh@662 -- # es=106 00:06:19.218 19:15:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.218 19:15:17 -- common/autotest_common.sh@670 -- # es=1 00:06:19.218 19:15:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.218 00:06:19.218 real 0m0.397s 00:06:19.218 user 0m0.287s 00:06:19.218 sys 0m0.145s 00:06:19.218 19:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.218 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.218 ************************************ 00:06:19.218 END TEST accel_missing_filename 00:06:19.218 ************************************ 00:06:19.218 19:15:17 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.218 19:15:17 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:19.218 19:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.218 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.218 ************************************ 00:06:19.218 START TEST accel_compress_verify 00:06:19.218 ************************************ 00:06:19.218 19:15:17 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.218 19:15:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.218 19:15:17 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.218 19:15:17 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:19.218 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.218 19:15:17 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:19.218 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.218 19:15:17 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.218 19:15:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.219 19:15:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.219 19:15:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.219 19:15:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.219 19:15:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.219 19:15:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.219 19:15:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.219 19:15:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.219 19:15:17 -- accel/accel.sh@42 -- # jq -r . 00:06:19.219 [2024-11-17 19:15:17.480445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:19.219 [2024-11-17 19:15:17.480523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084194 ] 00:06:19.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.476 [2024-11-17 19:15:17.544533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.476 [2024-11-17 19:15:17.635284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.476 [2024-11-17 19:15:17.697144] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.735 [2024-11-17 19:15:17.784436] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:19.735 00:06:19.735 Compression does not support the verify option, aborting. 00:06:19.735 19:15:17 -- common/autotest_common.sh@653 -- # es=161 00:06:19.735 19:15:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.735 19:15:17 -- common/autotest_common.sh@662 -- # es=33 00:06:19.735 19:15:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.735 19:15:17 -- common/autotest_common.sh@670 -- # es=1 00:06:19.735 19:15:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.735 00:06:19.735 real 0m0.397s 00:06:19.735 user 0m0.281s 00:06:19.735 sys 0m0.148s 00:06:19.735 19:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.735 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.735 ************************************ 00:06:19.735 END TEST accel_compress_verify 00:06:19.735 ************************************ 00:06:19.735 19:15:17 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:19.735 19:15:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.735 19:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.735 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.735 ************************************ 00:06:19.735 START TEST accel_wrong_workload 00:06:19.735 ************************************ 00:06:19.735 19:15:17 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:19.735 19:15:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.735 19:15:17 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:19.735 19:15:17 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:19.735 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.735 19:15:17 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:19.735 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.735 19:15:17 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:19.735 19:15:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:19.735 19:15:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.735 19:15:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.735 19:15:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.735 19:15:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.735 19:15:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.735 19:15:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.735 19:15:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.735 19:15:17 -- accel/accel.sh@42 -- # jq -r . 
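Editor's note: each of these negative cases is driven through run_test ... NOT accel_perf ..., where NOT (from autotest_common.sh) inverts the wrapped command's exit status; the es=.../(( es > 128 ))/(( !es == 0 )) lines in the trace are its exit-code normalization. A hypothetical, heavily reduced version of that helper is sketched below; the real one also validates its argument with valid_exec_arg and maps specific codes through a case statement.

    # Hypothetical reduction of the NOT helper traced above:
    # the negative test passes only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?          # run the command, capture its exit status
        (( es > 128 )) && es=1 # the real helper folds signal-style codes down
        (( !es == 0 ))         # exit 0 (success) only for a non-zero es
    }
    NOT false   # succeeds: false exits non-zero (as accel_perf does for -w foobar)
    NOT true    # fails:    true exits zero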
00:06:19.735 Unsupported workload type: foobar 00:06:19.735 [2024-11-17 19:15:17.901253] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:19.735 accel_perf options: 00:06:19.735 [-h help message] 00:06:19.735 [-q queue depth per core] 00:06:19.735 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.735 [-T number of threads per core 00:06:19.735 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.735 [-t time in seconds] 00:06:19.735 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.735 [ dif_verify, , dif_generate, dif_generate_copy 00:06:19.735 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.735 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.735 [-S for crc32c workload, use this seed value (default 0) 00:06:19.735 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.735 [-f for fill workload, use this BYTE value (default 255) 00:06:19.735 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.735 [-y verify result if this switch is on] 00:06:19.735 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.735 Can be used to spread operations across a wider range of memory. 00:06:19.735 19:15:17 -- common/autotest_common.sh@653 -- # es=1 00:06:19.735 19:15:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.735 19:15:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.735 19:15:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.735 00:06:19.735 real 0m0.020s 00:06:19.735 user 0m0.009s 00:06:19.735 sys 0m0.011s 00:06:19.735 19:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.735 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.735 ************************************ 00:06:19.735 END TEST accel_wrong_workload 00:06:19.735 ************************************ 00:06:19.735 Error: writing output failed: Broken pipe 00:06:19.735 19:15:17 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.735 19:15:17 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:19.735 19:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.735 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.735 ************************************ 00:06:19.735 START TEST accel_negative_buffers 00:06:19.735 ************************************ 00:06:19.735 19:15:17 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.735 19:15:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.735 19:15:17 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:19.735 19:15:17 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:19.735 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.735 19:15:17 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:19.735 19:15:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.735 19:15:17 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:19.735 19:15:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:19.736 19:15:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.736 19:15:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.736 19:15:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.736 19:15:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.736 19:15:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.736 19:15:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.736 19:15:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.736 19:15:17 -- accel/accel.sh@42 -- # jq -r . 00:06:19.736 -x option must be non-negative. 00:06:19.736 [2024-11-17 19:15:17.947426] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:19.736 accel_perf options: 00:06:19.736 [-h help message] 00:06:19.736 [-q queue depth per core] 00:06:19.736 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.736 [-T number of threads per core 00:06:19.736 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:19.736 [-t time in seconds] 00:06:19.736 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.736 [ dif_verify, , dif_generate, dif_generate_copy 00:06:19.736 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.736 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.736 [-S for crc32c workload, use this seed value (default 0) 00:06:19.736 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.736 [-f for fill workload, use this BYTE value (default 255) 00:06:19.736 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.736 [-y verify result if this switch is on] 00:06:19.736 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.736 Can be used to spread operations across a wider range of memory. 
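Editor's note: the option listing above is accel_perf's own usage text, printed here because -x was given a negative value. Pulling the documented flags together, a plain positive run built only from options shown in this listing and in the tests that follow would look like the line below; the parameter values are illustrative, the binary path is the one used throughout this log.

    # Illustrative accel_perf run using flags from the usage text above
    # (-q queue depth, -o transfer size, -t run time, -w workload, -S seed, -y verify):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -q 32 -o 4096 -t 1 -w crc32c -S 32 -y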
00:06:19.736 19:15:17 -- common/autotest_common.sh@653 -- # es=1 00:06:19.736 19:15:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.736 19:15:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.736 19:15:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.736 00:06:19.736 real 0m0.021s 00:06:19.736 user 0m0.013s 00:06:19.736 sys 0m0.009s 00:06:19.736 19:15:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.736 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.736 ************************************ 00:06:19.736 END TEST accel_negative_buffers 00:06:19.736 ************************************ 00:06:19.736 Error: writing output failed: Broken pipe 00:06:19.736 19:15:17 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:19.736 19:15:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:19.736 19:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.736 19:15:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.736 ************************************ 00:06:19.736 START TEST accel_crc32c 00:06:19.736 ************************************ 00:06:19.736 19:15:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:19.736 19:15:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.736 19:15:17 -- accel/accel.sh@17 -- # local accel_module 00:06:19.736 19:15:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:19.736 19:15:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:19.736 19:15:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.736 19:15:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.736 19:15:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.736 19:15:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.736 19:15:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.736 19:15:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.736 19:15:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.736 19:15:17 -- accel/accel.sh@42 -- # jq -r . 00:06:19.736 [2024-11-17 19:15:17.991024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:19.736 [2024-11-17 19:15:17.991088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084369 ] 00:06:19.995 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.995 [2024-11-17 19:15:18.054785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.995 [2024-11-17 19:15:18.145671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.370 19:15:19 -- accel/accel.sh@18 -- # out=' 00:06:21.370 SPDK Configuration: 00:06:21.370 Core mask: 0x1 00:06:21.370 00:06:21.370 Accel Perf Configuration: 00:06:21.370 Workload Type: crc32c 00:06:21.370 CRC-32C seed: 32 00:06:21.370 Transfer size: 4096 bytes 00:06:21.370 Vector count 1 00:06:21.370 Module: software 00:06:21.370 Queue depth: 32 00:06:21.370 Allocate depth: 32 00:06:21.370 # threads/core: 1 00:06:21.370 Run time: 1 seconds 00:06:21.370 Verify: Yes 00:06:21.370 00:06:21.370 Running for 1 seconds... 
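Editor's note: in the results table that follows, the bandwidth column is simply transfers per second multiplied by the 4096-byte transfer size. A quick cross-check of the first crc32c row (404640/s, reported as 1580 MiB/s):

    # 404640 transfers/s * 4096 B = 1,657,405,440 B/s, i.e. ~1580 MiB/s
    echo $(( 404640 * 4096 / 1024 / 1024 ))   # prints 1580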
00:06:21.370 00:06:21.370 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.370 ------------------------------------------------------------------------------------ 00:06:21.370 0,0 404640/s 1580 MiB/s 0 0 00:06:21.370 ==================================================================================== 00:06:21.370 Total 404640/s 1580 MiB/s 0 0' 00:06:21.370 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.370 19:15:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:21.370 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.370 19:15:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:21.370 19:15:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.370 19:15:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.370 19:15:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.370 19:15:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.370 19:15:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.370 19:15:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.370 19:15:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.370 19:15:19 -- accel/accel.sh@42 -- # jq -r . 00:06:21.370 [2024-11-17 19:15:19.386608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.371 [2024-11-17 19:15:19.386701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084516 ] 00:06:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.371 [2024-11-17 19:15:19.448308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.371 [2024-11-17 19:15:19.538326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=0x1 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=crc32c 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=32 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 
19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=software 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=32 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=32 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=1 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val=Yes 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:21.371 19:15:19 -- accel/accel.sh@21 -- # val= 00:06:21.371 19:15:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # IFS=: 00:06:21.371 19:15:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@21 -- # val= 00:06:22.745 19:15:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@21 -- # val= 00:06:22.745 19:15:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@21 -- # val= 00:06:22.745 19:15:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@21 -- # val= 00:06:22.745 19:15:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@21 -- # val= 00:06:22.745 19:15:20 -- accel/accel.sh@22 -- # case "$var" in 
00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@21 -- # val= 00:06:22.745 19:15:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # IFS=: 00:06:22.745 19:15:20 -- accel/accel.sh@20 -- # read -r var val 00:06:22.745 19:15:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.745 19:15:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:22.745 19:15:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.745 00:06:22.745 real 0m2.803s 00:06:22.745 user 0m2.502s 00:06:22.745 sys 0m0.294s 00:06:22.745 19:15:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.745 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 ************************************ 00:06:22.745 END TEST accel_crc32c 00:06:22.745 ************************************ 00:06:22.745 19:15:20 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:22.745 19:15:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:22.745 19:15:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.745 19:15:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.745 ************************************ 00:06:22.745 START TEST accel_crc32c_C2 00:06:22.745 ************************************ 00:06:22.745 19:15:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:22.745 19:15:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.745 19:15:20 -- accel/accel.sh@17 -- # local accel_module 00:06:22.745 19:15:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:22.745 19:15:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:22.745 19:15:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.745 19:15:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.745 19:15:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.745 19:15:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.745 19:15:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.745 19:15:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.745 19:15:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.745 19:15:20 -- accel/accel.sh@42 -- # jq -r . 00:06:22.745 [2024-11-17 19:15:20.820832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:22.745 [2024-11-17 19:15:20.820903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084672 ] 00:06:22.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.745 [2024-11-17 19:15:20.881619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.745 [2024-11-17 19:15:20.969419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.119 19:15:22 -- accel/accel.sh@18 -- # out=' 00:06:24.119 SPDK Configuration: 00:06:24.119 Core mask: 0x1 00:06:24.119 00:06:24.119 Accel Perf Configuration: 00:06:24.119 Workload Type: crc32c 00:06:24.119 CRC-32C seed: 0 00:06:24.119 Transfer size: 4096 bytes 00:06:24.119 Vector count 2 00:06:24.119 Module: software 00:06:24.119 Queue depth: 32 00:06:24.119 Allocate depth: 32 00:06:24.119 # threads/core: 1 00:06:24.119 Run time: 1 seconds 00:06:24.119 Verify: Yes 00:06:24.119 00:06:24.119 Running for 1 seconds... 00:06:24.119 00:06:24.119 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.119 ------------------------------------------------------------------------------------ 00:06:24.119 0,0 315232/s 2462 MiB/s 0 0 00:06:24.119 ==================================================================================== 00:06:24.119 Total 315232/s 1231 MiB/s 0 0' 00:06:24.119 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.119 19:15:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:24.119 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.119 19:15:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:24.119 19:15:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.119 19:15:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.119 19:15:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.119 19:15:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.120 19:15:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.120 19:15:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.120 19:15:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.120 19:15:22 -- accel/accel.sh@42 -- # jq -r . 00:06:24.120 [2024-11-17 19:15:22.217916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:24.120 [2024-11-17 19:15:22.218003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084814 ] 00:06:24.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.120 [2024-11-17 19:15:22.278801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.120 [2024-11-17 19:15:22.369144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=0x1 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=crc32c 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=0 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=software 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=32 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=32 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- 
accel/accel.sh@21 -- # val=1 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.378 19:15:22 -- accel/accel.sh@21 -- # val=Yes 00:06:24.378 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.378 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.379 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.379 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.379 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.379 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:24.379 19:15:22 -- accel/accel.sh@21 -- # val= 00:06:24.379 19:15:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.379 19:15:22 -- accel/accel.sh@20 -- # IFS=: 00:06:24.379 19:15:22 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@21 -- # val= 00:06:25.752 19:15:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@21 -- # val= 00:06:25.752 19:15:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@21 -- # val= 00:06:25.752 19:15:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@21 -- # val= 00:06:25.752 19:15:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@21 -- # val= 00:06:25.752 19:15:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@21 -- # val= 00:06:25.752 19:15:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.752 19:15:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.752 19:15:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.752 19:15:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:25.753 19:15:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.753 00:06:25.753 real 0m2.793s 00:06:25.753 user 0m2.506s 00:06:25.753 sys 0m0.278s 00:06:25.753 19:15:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.753 19:15:23 -- common/autotest_common.sh@10 -- # set +x 00:06:25.753 ************************************ 00:06:25.753 END TEST accel_crc32c_C2 00:06:25.753 ************************************ 00:06:25.753 19:15:23 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:25.753 19:15:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:25.753 19:15:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.753 19:15:23 -- common/autotest_common.sh@10 -- # set +x 00:06:25.753 ************************************ 00:06:25.753 START TEST accel_copy 
00:06:25.753 ************************************ 00:06:25.753 19:15:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:25.753 19:15:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.753 19:15:23 -- accel/accel.sh@17 -- # local accel_module 00:06:25.753 19:15:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:25.753 19:15:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:25.753 19:15:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.753 19:15:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.753 19:15:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.753 19:15:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.753 19:15:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.753 19:15:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.753 19:15:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.753 19:15:23 -- accel/accel.sh@42 -- # jq -r . 00:06:25.753 [2024-11-17 19:15:23.639263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.753 [2024-11-17 19:15:23.639343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085094 ] 00:06:25.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.753 [2024-11-17 19:15:23.702080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.753 [2024-11-17 19:15:23.791508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.127 19:15:25 -- accel/accel.sh@18 -- # out=' 00:06:27.127 SPDK Configuration: 00:06:27.127 Core mask: 0x1 00:06:27.127 00:06:27.127 Accel Perf Configuration: 00:06:27.127 Workload Type: copy 00:06:27.127 Transfer size: 4096 bytes 00:06:27.127 Vector count 1 00:06:27.127 Module: software 00:06:27.127 Queue depth: 32 00:06:27.127 Allocate depth: 32 00:06:27.127 # threads/core: 1 00:06:27.127 Run time: 1 seconds 00:06:27.127 Verify: Yes 00:06:27.127 00:06:27.127 Running for 1 seconds... 00:06:27.127 00:06:27.127 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.127 ------------------------------------------------------------------------------------ 00:06:27.127 0,0 284992/s 1113 MiB/s 0 0 00:06:27.127 ==================================================================================== 00:06:27.127 Total 284992/s 1113 MiB/s 0 0' 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:27.127 19:15:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.127 19:15:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.127 19:15:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.127 19:15:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.127 19:15:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.127 19:15:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.127 19:15:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.127 19:15:25 -- accel/accel.sh@42 -- # jq -r . 00:06:27.127 [2024-11-17 19:15:25.034615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:27.127 [2024-11-17 19:15:25.034702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085236 ] 00:06:27.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.127 [2024-11-17 19:15:25.095453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.127 [2024-11-17 19:15:25.185579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=0x1 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=copy 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=software 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=32 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=32 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=1 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:27.127 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.127 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.127 19:15:25 -- accel/accel.sh@21 -- # val=Yes 00:06:27.128 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.128 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.128 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.128 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.128 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.128 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.128 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:27.128 19:15:25 -- accel/accel.sh@21 -- # val= 00:06:27.128 19:15:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.128 19:15:25 -- accel/accel.sh@20 -- # IFS=: 00:06:27.128 19:15:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@21 -- # val= 00:06:28.506 19:15:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@21 -- # val= 00:06:28.506 19:15:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@21 -- # val= 00:06:28.506 19:15:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@21 -- # val= 00:06:28.506 19:15:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@21 -- # val= 00:06:28.506 19:15:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@21 -- # val= 00:06:28.506 19:15:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # IFS=: 00:06:28.506 19:15:26 -- accel/accel.sh@20 -- # read -r var val 00:06:28.506 19:15:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.506 19:15:26 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:28.506 19:15:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.506 00:06:28.506 real 0m2.803s 00:06:28.506 user 0m2.515s 00:06:28.506 sys 0m0.279s 00:06:28.506 19:15:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.506 19:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 ************************************ 00:06:28.506 END TEST accel_copy 00:06:28.506 ************************************ 00:06:28.506 19:15:26 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.506 19:15:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:28.506 19:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.506 19:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 ************************************ 00:06:28.506 START TEST accel_fill 00:06:28.506 ************************************ 00:06:28.506 19:15:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.506 19:15:26 -- accel/accel.sh@16 -- # local accel_opc 
00:06:28.506 19:15:26 -- accel/accel.sh@17 -- # local accel_module 00:06:28.506 19:15:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.506 19:15:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:28.506 19:15:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.507 19:15:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.507 19:15:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.507 19:15:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.507 19:15:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.507 19:15:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.507 19:15:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.507 19:15:26 -- accel/accel.sh@42 -- # jq -r . 00:06:28.507 [2024-11-17 19:15:26.465994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:28.507 [2024-11-17 19:15:26.466071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085398 ] 00:06:28.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.507 [2024-11-17 19:15:26.528876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.507 [2024-11-17 19:15:26.616325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.880 19:15:27 -- accel/accel.sh@18 -- # out=' 00:06:29.880 SPDK Configuration: 00:06:29.880 Core mask: 0x1 00:06:29.880 00:06:29.880 Accel Perf Configuration: 00:06:29.880 Workload Type: fill 00:06:29.880 Fill pattern: 0x80 00:06:29.880 Transfer size: 4096 bytes 00:06:29.880 Vector count 1 00:06:29.880 Module: software 00:06:29.880 Queue depth: 64 00:06:29.880 Allocate depth: 64 00:06:29.880 # threads/core: 1 00:06:29.880 Run time: 1 seconds 00:06:29.880 Verify: Yes 00:06:29.880 00:06:29.880 Running for 1 seconds... 00:06:29.880 00:06:29.880 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.880 ------------------------------------------------------------------------------------ 00:06:29.880 0,0 407232/s 1590 MiB/s 0 0 00:06:29.880 ==================================================================================== 00:06:29.880 Total 407232/s 1590 MiB/s 0 0' 00:06:29.880 19:15:27 -- accel/accel.sh@20 -- # IFS=: 00:06:29.880 19:15:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.880 19:15:27 -- accel/accel.sh@20 -- # read -r var val 00:06:29.880 19:15:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.880 19:15:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.880 19:15:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.880 19:15:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.880 19:15:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.880 19:15:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.880 19:15:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.880 19:15:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.880 19:15:27 -- accel/accel.sh@42 -- # jq -r . 00:06:29.880 [2024-11-17 19:15:27.862816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:29.880 [2024-11-17 19:15:27.862894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085541 ] 00:06:29.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.880 [2024-11-17 19:15:27.924044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.880 [2024-11-17 19:15:28.014794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.880 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.880 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.880 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.880 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.880 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.880 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.880 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=0x1 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=fill 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=0x80 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=software 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=64 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=64 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- 
accel/accel.sh@21 -- # val=1 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val=Yes 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:29.881 19:15:28 -- accel/accel.sh@21 -- # val= 00:06:29.881 19:15:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # IFS=: 00:06:29.881 19:15:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@21 -- # val= 00:06:31.254 19:15:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@21 -- # val= 00:06:31.254 19:15:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@21 -- # val= 00:06:31.254 19:15:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@21 -- # val= 00:06:31.254 19:15:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@21 -- # val= 00:06:31.254 19:15:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@21 -- # val= 00:06:31.254 19:15:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # IFS=: 00:06:31.254 19:15:29 -- accel/accel.sh@20 -- # read -r var val 00:06:31.254 19:15:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.254 19:15:29 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:31.254 19:15:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.254 00:06:31.254 real 0m2.795s 00:06:31.254 user 0m2.503s 00:06:31.254 sys 0m0.283s 00:06:31.254 19:15:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.254 19:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.254 ************************************ 00:06:31.254 END TEST accel_fill 00:06:31.254 ************************************ 00:06:31.254 19:15:29 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:31.254 19:15:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:31.254 19:15:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.254 19:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.254 ************************************ 00:06:31.254 START TEST 
accel_copy_crc32c 00:06:31.254 ************************************ 00:06:31.254 19:15:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:31.254 19:15:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.254 19:15:29 -- accel/accel.sh@17 -- # local accel_module 00:06:31.254 19:15:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:31.254 19:15:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:31.254 19:15:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.254 19:15:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.254 19:15:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.254 19:15:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.254 19:15:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.254 19:15:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.254 19:15:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.254 19:15:29 -- accel/accel.sh@42 -- # jq -r . 00:06:31.254 [2024-11-17 19:15:29.289474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:31.254 [2024-11-17 19:15:29.289551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085794 ] 00:06:31.254 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.254 [2024-11-17 19:15:29.352634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.254 [2024-11-17 19:15:29.445097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.628 19:15:30 -- accel/accel.sh@18 -- # out=' 00:06:32.628 SPDK Configuration: 00:06:32.628 Core mask: 0x1 00:06:32.628 00:06:32.628 Accel Perf Configuration: 00:06:32.628 Workload Type: copy_crc32c 00:06:32.628 CRC-32C seed: 0 00:06:32.628 Vector size: 4096 bytes 00:06:32.628 Transfer size: 4096 bytes 00:06:32.628 Vector count 1 00:06:32.628 Module: software 00:06:32.628 Queue depth: 32 00:06:32.628 Allocate depth: 32 00:06:32.628 # threads/core: 1 00:06:32.628 Run time: 1 seconds 00:06:32.628 Verify: Yes 00:06:32.628 00:06:32.628 Running for 1 seconds... 00:06:32.628 00:06:32.628 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.628 ------------------------------------------------------------------------------------ 00:06:32.628 0,0 220000/s 859 MiB/s 0 0 00:06:32.628 ==================================================================================== 00:06:32.628 Total 220000/s 859 MiB/s 0 0' 00:06:32.628 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.628 19:15:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:32.628 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.628 19:15:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:32.628 19:15:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.628 19:15:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.628 19:15:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.628 19:15:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.628 19:15:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.628 19:15:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.628 19:15:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.628 19:15:30 -- accel/accel.sh@42 -- # jq -r . 
00:06:32.628 [2024-11-17 19:15:30.693509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.628 [2024-11-17 19:15:30.693578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085962 ] 00:06:32.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.628 [2024-11-17 19:15:30.755597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.628 [2024-11-17 19:15:30.846228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=0x1 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=0 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=software 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=32 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 
00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=32 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=1 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val=Yes 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:32.887 19:15:30 -- accel/accel.sh@21 -- # val= 00:06:32.887 19:15:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # IFS=: 00:06:32.887 19:15:30 -- accel/accel.sh@20 -- # read -r var val 00:06:33.821 19:15:32 -- accel/accel.sh@21 -- # val= 00:06:33.821 19:15:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.821 19:15:32 -- accel/accel.sh@21 -- # val= 00:06:33.821 19:15:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.821 19:15:32 -- accel/accel.sh@21 -- # val= 00:06:33.821 19:15:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.821 19:15:32 -- accel/accel.sh@21 -- # val= 00:06:33.821 19:15:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.821 19:15:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.822 19:15:32 -- accel/accel.sh@21 -- # val= 00:06:33.822 19:15:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.822 19:15:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.822 19:15:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.822 19:15:32 -- accel/accel.sh@21 -- # val= 00:06:33.822 19:15:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.822 19:15:32 -- accel/accel.sh@20 -- # IFS=: 00:06:33.822 19:15:32 -- accel/accel.sh@20 -- # read -r var val 00:06:33.822 19:15:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.822 19:15:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:33.822 19:15:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.822 00:06:33.822 real 0m2.814s 00:06:33.822 user 0m2.511s 00:06:33.822 sys 0m0.294s 00:06:33.822 19:15:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.822 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:33.822 ************************************ 00:06:33.822 END TEST accel_copy_crc32c 00:06:33.822 ************************************ 00:06:34.080 
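The accel_copy_crc32c test above runs the software accel module with a 4096-byte transfer and CRC-32C seed 0: each operation copies a 4 KiB source buffer into a destination and computes a CRC-32C over the copied bytes. As a minimal sketch of that semantics only (this is not SPDK code, and it does not reproduce accel_perf's exact seed handling), using a plain bitwise CRC-32C with the reflected Castagnoli polynomial 0x82F63B38:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B38. */
uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B38u : 0u);
    }
    return crc;
}

/* One copy_crc32c operation: copy src into dst, return CRC-32C of the data. */
uint32_t copy_crc32c(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);                       /* the "copy" half          */
    uint32_t crc = crc32c_update(0xFFFFFFFFu,    /* customary initial value  */
                                 src, len);
    return crc ^ 0xFFFFFFFFu;                    /* customary final XOR      */
}
```

With a 4096-byte transfer, the table's bandwidth is simply transfers/s times transfer size: 220000/s * 4096 B / 2^20 is roughly 859 MiB/s, consistent with the reported figure.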
19:15:32 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.080 19:15:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:34.080 19:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.080 19:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:34.080 ************************************ 00:06:34.080 START TEST accel_copy_crc32c_C2 00:06:34.080 ************************************ 00:06:34.080 19:15:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:34.080 19:15:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.080 19:15:32 -- accel/accel.sh@17 -- # local accel_module 00:06:34.080 19:15:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:34.080 19:15:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:34.080 19:15:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.080 19:15:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.080 19:15:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.080 19:15:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.080 19:15:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.080 19:15:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.080 19:15:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.080 19:15:32 -- accel/accel.sh@42 -- # jq -r . 00:06:34.080 [2024-11-17 19:15:32.132537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.080 [2024-11-17 19:15:32.132628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086119 ] 00:06:34.080 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.080 [2024-11-17 19:15:32.195991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.080 [2024-11-17 19:15:32.284139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.454 19:15:33 -- accel/accel.sh@18 -- # out=' 00:06:35.454 SPDK Configuration: 00:06:35.454 Core mask: 0x1 00:06:35.454 00:06:35.454 Accel Perf Configuration: 00:06:35.454 Workload Type: copy_crc32c 00:06:35.454 CRC-32C seed: 0 00:06:35.454 Vector size: 4096 bytes 00:06:35.454 Transfer size: 8192 bytes 00:06:35.454 Vector count 2 00:06:35.454 Module: software 00:06:35.454 Queue depth: 32 00:06:35.454 Allocate depth: 32 00:06:35.454 # threads/core: 1 00:06:35.454 Run time: 1 seconds 00:06:35.454 Verify: Yes 00:06:35.454 00:06:35.454 Running for 1 seconds... 
00:06:35.454 00:06:35.454 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.454 ------------------------------------------------------------------------------------ 00:06:35.454 0,0 155488/s 1214 MiB/s 0 0 00:06:35.454 ==================================================================================== 00:06:35.454 Total 155488/s 607 MiB/s 0 0' 00:06:35.454 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.454 19:15:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:35.454 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.454 19:15:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:35.454 19:15:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.454 19:15:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.454 19:15:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.454 19:15:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.454 19:15:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.454 19:15:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.454 19:15:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.454 19:15:33 -- accel/accel.sh@42 -- # jq -r . 00:06:35.454 [2024-11-17 19:15:33.524193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:35.454 [2024-11-17 19:15:33.524291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086267 ] 00:06:35.454 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.454 [2024-11-17 19:15:33.587658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.454 [2024-11-17 19:15:33.677129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=0x1 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=0 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 
00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=software 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=32 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=32 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=1 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val=Yes 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.713 19:15:33 -- accel/accel.sh@21 -- # val= 00:06:35.713 19:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # IFS=: 00:06:35.713 19:15:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@21 -- # val= 00:06:36.648 19:15:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # IFS=: 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@21 -- # val= 00:06:36.648 19:15:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # IFS=: 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@21 -- # val= 00:06:36.648 19:15:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # IFS=: 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@21 -- # val= 00:06:36.648 19:15:34 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # IFS=: 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@21 -- # val= 00:06:36.648 19:15:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # IFS=: 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@21 -- # val= 00:06:36.648 19:15:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # IFS=: 00:06:36.648 19:15:34 -- accel/accel.sh@20 -- # read -r var val 00:06:36.648 19:15:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.648 19:15:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:36.648 19:15:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.648 00:06:36.648 real 0m2.800s 00:06:36.648 user 0m2.493s 00:06:36.648 sys 0m0.298s 00:06:36.648 19:15:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.648 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.648 ************************************ 00:06:36.648 END TEST accel_copy_crc32c_C2 00:06:36.648 ************************************ 00:06:36.906 19:15:34 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:36.906 19:15:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:36.906 19:15:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.906 19:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.906 ************************************ 00:06:36.906 START TEST accel_dualcast 00:06:36.906 ************************************ 00:06:36.906 19:15:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:36.906 19:15:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.906 19:15:34 -- accel/accel.sh@17 -- # local accel_module 00:06:36.906 19:15:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:36.906 19:15:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:36.906 19:15:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.906 19:15:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.906 19:15:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.906 19:15:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.906 19:15:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.906 19:15:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.906 19:15:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.906 19:15:34 -- accel/accel.sh@42 -- # jq -r . 00:06:36.906 [2024-11-17 19:15:34.955615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
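For the copy_crc32c -y -C 2 run above (Vector size: 4096 bytes, Transfer size: 8192 bytes, Vector count 2), each operation presents its 8192-byte payload as two 4096-byte source vectors and folds both into a single CRC-32C. A hedged sketch of that accumulation, reusing the illustrative crc32c_update() helper from the earlier sketch (again, not SPDK code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Bitwise CRC-32C helper as defined in the earlier illustrative sketch. */
uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len);

/* One segment of a vectored copy_crc32c operation. */
struct seg { void *dst; const void *src; size_t len; };

/* Copy every segment and accumulate one CRC-32C across all of them. */
uint32_t copy_crc32c_vectored(const struct seg *segs, int nsegs)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (int i = 0; i < nsegs; i++) {
        memcpy(segs[i].dst, segs[i].src, segs[i].len);
        crc = crc32c_update(crc, segs[i].src, segs[i].len);
    }
    return crc ^ 0xFFFFFFFFu;
}
```

On the numbers in that table: 155488/s * 8192 B / 2^20 comes to just under 1215 MiB/s, in line with the per-core row's 1214 MiB/s, while the Total row's 607 MiB/s corresponds to the same rate over a single 4096-byte vector.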
00:06:36.906 [2024-11-17 19:15:34.955748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086482 ] 00:06:36.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.907 [2024-11-17 19:15:35.020298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.907 [2024-11-17 19:15:35.112401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.279 19:15:36 -- accel/accel.sh@18 -- # out=' 00:06:38.279 SPDK Configuration: 00:06:38.279 Core mask: 0x1 00:06:38.279 00:06:38.279 Accel Perf Configuration: 00:06:38.279 Workload Type: dualcast 00:06:38.279 Transfer size: 4096 bytes 00:06:38.279 Vector count 1 00:06:38.279 Module: software 00:06:38.279 Queue depth: 32 00:06:38.279 Allocate depth: 32 00:06:38.279 # threads/core: 1 00:06:38.279 Run time: 1 seconds 00:06:38.279 Verify: Yes 00:06:38.279 00:06:38.279 Running for 1 seconds... 00:06:38.279 00:06:38.279 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.280 ------------------------------------------------------------------------------------ 00:06:38.280 0,0 297376/s 1161 MiB/s 0 0 00:06:38.280 ==================================================================================== 00:06:38.280 Total 297376/s 1161 MiB/s 0 0' 00:06:38.280 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.280 19:15:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:38.280 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.280 19:15:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.280 19:15:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.280 19:15:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.280 19:15:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.280 19:15:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.280 19:15:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.280 19:15:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.280 19:15:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.280 19:15:36 -- accel/accel.sh@42 -- # jq -r . 00:06:38.280 [2024-11-17 19:15:36.364556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
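The dualcast workload in the table above writes one 4096-byte source buffer into two separate destination buffers per operation. A minimal sketch of that semantics, not SPDK's implementation:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative dualcast: duplicate one source into two destinations. */
void dualcast(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}
```

The reported figure lines up with 297376/s * 4096 B / 2^20, roughly 1162 MiB/s, i.e. the bandwidth counts the source payload once rather than both destination writes.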
00:06:38.280 [2024-11-17 19:15:36.364633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086682 ] 00:06:38.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.280 [2024-11-17 19:15:36.425261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.280 [2024-11-17 19:15:36.514772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=0x1 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=dualcast 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=software 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=32 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=32 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=1 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val=Yes 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.538 19:15:36 -- accel/accel.sh@21 -- # val= 00:06:38.538 19:15:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # IFS=: 00:06:38.538 19:15:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@21 -- # val= 00:06:39.912 19:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@21 -- # val= 00:06:39.912 19:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@21 -- # val= 00:06:39.912 19:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@21 -- # val= 00:06:39.912 19:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@21 -- # val= 00:06:39.912 19:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@21 -- # val= 00:06:39.912 19:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.912 19:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.912 19:15:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.912 19:15:37 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:39.912 19:15:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.912 00:06:39.912 real 0m2.813s 00:06:39.912 user 0m2.530s 00:06:39.912 sys 0m0.273s 00:06:39.912 19:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.912 19:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.912 ************************************ 00:06:39.912 END TEST accel_dualcast 00:06:39.912 ************************************ 00:06:39.912 19:15:37 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:39.912 19:15:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:39.912 19:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.912 19:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.912 ************************************ 00:06:39.912 START TEST accel_compare 00:06:39.912 ************************************ 00:06:39.912 19:15:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:39.912 19:15:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.912 19:15:37 
-- accel/accel.sh@17 -- # local accel_module 00:06:39.912 19:15:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:39.912 19:15:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:39.912 19:15:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.912 19:15:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.912 19:15:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.912 19:15:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.912 19:15:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.912 19:15:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.912 19:15:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.912 19:15:37 -- accel/accel.sh@42 -- # jq -r . 00:06:39.912 [2024-11-17 19:15:37.789853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:39.912 [2024-11-17 19:15:37.789924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086844 ] 00:06:39.912 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.912 [2024-11-17 19:15:37.849290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.912 [2024-11-17 19:15:37.940401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.287 19:15:39 -- accel/accel.sh@18 -- # out=' 00:06:41.287 SPDK Configuration: 00:06:41.287 Core mask: 0x1 00:06:41.287 00:06:41.287 Accel Perf Configuration: 00:06:41.287 Workload Type: compare 00:06:41.287 Transfer size: 4096 bytes 00:06:41.287 Vector count 1 00:06:41.287 Module: software 00:06:41.287 Queue depth: 32 00:06:41.287 Allocate depth: 32 00:06:41.287 # threads/core: 1 00:06:41.287 Run time: 1 seconds 00:06:41.287 Verify: Yes 00:06:41.287 00:06:41.287 Running for 1 seconds... 00:06:41.287 00:06:41.287 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.287 ------------------------------------------------------------------------------------ 00:06:41.287 0,0 404192/s 1578 MiB/s 0 0 00:06:41.287 ==================================================================================== 00:06:41.287 Total 404192/s 1578 MiB/s 0 0' 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:41.287 19:15:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.287 19:15:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.287 19:15:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.287 19:15:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.287 19:15:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.287 19:15:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.287 19:15:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.287 19:15:39 -- accel/accel.sh@42 -- # jq -r . 00:06:41.287 [2024-11-17 19:15:39.184450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
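The compare workload above checks two 4096-byte buffers for equality; any mismatch would show up in the Miscompares column, which stays at 0 here. A minimal sketch of that semantics, not SPDK's implementation:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative compare: 0 when the buffers match, non-zero otherwise. */
int compare_bufs(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) != 0;
}
```

As with the other tables, 404192/s * 4096 B / 2^20 works out to roughly 1579 MiB/s, in line with the reported 1578 MiB/s.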
00:06:41.287 [2024-11-17 19:15:39.184514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086988 ] 00:06:41.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.287 [2024-11-17 19:15:39.244855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.287 [2024-11-17 19:15:39.335395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=0x1 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=compare 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=software 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=32 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=32 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=1 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val=Yes 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.287 19:15:39 -- accel/accel.sh@21 -- # val= 00:06:41.287 19:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:41.287 19:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@21 -- # val= 00:06:42.661 19:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@21 -- # val= 00:06:42.661 19:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@21 -- # val= 00:06:42.661 19:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@21 -- # val= 00:06:42.661 19:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@21 -- # val= 00:06:42.661 19:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@21 -- # val= 00:06:42.661 19:15:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # IFS=: 00:06:42.661 19:15:40 -- accel/accel.sh@20 -- # read -r var val 00:06:42.661 19:15:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.661 19:15:40 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:42.661 19:15:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.661 00:06:42.661 real 0m2.800s 00:06:42.661 user 0m2.515s 00:06:42.661 sys 0m0.275s 00:06:42.661 19:15:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.661 19:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 ************************************ 00:06:42.661 END TEST accel_compare 00:06:42.661 ************************************ 00:06:42.661 19:15:40 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:42.661 19:15:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.661 19:15:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.661 19:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 ************************************ 00:06:42.661 START TEST accel_xor 00:06:42.661 ************************************ 00:06:42.661 19:15:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:42.661 19:15:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.661 19:15:40 -- accel/accel.sh@17 
-- # local accel_module 00:06:42.661 19:15:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:42.661 19:15:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:42.661 19:15:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.661 19:15:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.661 19:15:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.661 19:15:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.661 19:15:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.661 19:15:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.661 19:15:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.661 19:15:40 -- accel/accel.sh@42 -- # jq -r . 00:06:42.661 [2024-11-17 19:15:40.619034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:42.662 [2024-11-17 19:15:40.619110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087172 ] 00:06:42.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.662 [2024-11-17 19:15:40.680370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.662 [2024-11-17 19:15:40.770008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.036 19:15:42 -- accel/accel.sh@18 -- # out=' 00:06:44.036 SPDK Configuration: 00:06:44.036 Core mask: 0x1 00:06:44.036 00:06:44.036 Accel Perf Configuration: 00:06:44.036 Workload Type: xor 00:06:44.036 Source buffers: 2 00:06:44.036 Transfer size: 4096 bytes 00:06:44.036 Vector count 1 00:06:44.036 Module: software 00:06:44.036 Queue depth: 32 00:06:44.036 Allocate depth: 32 00:06:44.036 # threads/core: 1 00:06:44.036 Run time: 1 seconds 00:06:44.036 Verify: Yes 00:06:44.036 00:06:44.036 Running for 1 seconds... 00:06:44.036 00:06:44.036 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.036 ------------------------------------------------------------------------------------ 00:06:44.036 0,0 192768/s 753 MiB/s 0 0 00:06:44.036 ==================================================================================== 00:06:44.036 Total 192768/s 753 MiB/s 0 0' 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.036 19:15:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.036 19:15:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:44.036 19:15:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.036 19:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.036 19:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.036 19:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.036 19:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.036 19:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.036 19:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.036 19:15:42 -- accel/accel.sh@42 -- # jq -r . 00:06:44.036 [2024-11-17 19:15:42.024243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
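The xor workload above (Source buffers: 2) XORs two 4096-byte source buffers into a destination. A minimal sketch of that semantics, byte-wise only for clarity and not SPDK's implementation (a real path would work in wider words):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative two-source xor into dst. */
void xor_2src(uint8_t *dst, const uint8_t *s0, const uint8_t *s1, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] = s0[i] ^ s1[i];
}
```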
00:06:44.036 [2024-11-17 19:15:42.024323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087407 ] 00:06:44.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.036 [2024-11-17 19:15:42.087819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.036 [2024-11-17 19:15:42.177232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.036 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.036 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.036 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.036 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.036 19:15:42 -- accel/accel.sh@21 -- # val=0x1 00:06:44.036 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.036 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.036 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.036 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.036 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.036 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.036 19:15:42 -- accel/accel.sh@21 -- # val=xor 00:06:44.036 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.036 19:15:42 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val=2 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val=software 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val=32 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val=32 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- 
accel/accel.sh@21 -- # val=1 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val=Yes 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.037 19:15:42 -- accel/accel.sh@21 -- # val= 00:06:44.037 19:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.037 19:15:42 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@21 -- # val= 00:06:45.412 19:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@21 -- # val= 00:06:45.412 19:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@21 -- # val= 00:06:45.412 19:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@21 -- # val= 00:06:45.412 19:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@21 -- # val= 00:06:45.412 19:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@21 -- # val= 00:06:45.412 19:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:45.412 19:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:45.412 19:15:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.412 19:15:43 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:45.412 19:15:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.412 00:06:45.412 real 0m2.809s 00:06:45.412 user 0m2.503s 00:06:45.412 sys 0m0.296s 00:06:45.412 19:15:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.412 19:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.412 ************************************ 00:06:45.412 END TEST accel_xor 00:06:45.412 ************************************ 00:06:45.412 19:15:43 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:45.412 19:15:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:45.412 19:15:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.412 19:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.412 ************************************ 00:06:45.412 START TEST accel_xor 
00:06:45.412 ************************************ 00:06:45.412 19:15:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:45.412 19:15:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.412 19:15:43 -- accel/accel.sh@17 -- # local accel_module 00:06:45.412 19:15:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:45.412 19:15:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:45.412 19:15:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.412 19:15:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.412 19:15:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.412 19:15:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.412 19:15:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.412 19:15:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.412 19:15:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.412 19:15:43 -- accel/accel.sh@42 -- # jq -r . 00:06:45.412 [2024-11-17 19:15:43.452871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:45.412 [2024-11-17 19:15:43.452949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087565 ] 00:06:45.412 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.412 [2024-11-17 19:15:43.514229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.412 [2024-11-17 19:15:43.605732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.787 19:15:44 -- accel/accel.sh@18 -- # out=' 00:06:46.787 SPDK Configuration: 00:06:46.787 Core mask: 0x1 00:06:46.787 00:06:46.787 Accel Perf Configuration: 00:06:46.787 Workload Type: xor 00:06:46.787 Source buffers: 3 00:06:46.787 Transfer size: 4096 bytes 00:06:46.787 Vector count 1 00:06:46.787 Module: software 00:06:46.787 Queue depth: 32 00:06:46.787 Allocate depth: 32 00:06:46.787 # threads/core: 1 00:06:46.787 Run time: 1 seconds 00:06:46.787 Verify: Yes 00:06:46.787 00:06:46.787 Running for 1 seconds... 00:06:46.787 00:06:46.787 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.787 ------------------------------------------------------------------------------------ 00:06:46.787 0,0 185216/s 723 MiB/s 0 0 00:06:46.787 ==================================================================================== 00:06:46.787 Total 185216/s 723 MiB/s 0 0' 00:06:46.787 19:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:46.787 19:15:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:46.787 19:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:46.787 19:15:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:46.787 19:15:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.787 19:15:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.787 19:15:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.787 19:15:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.787 19:15:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.787 19:15:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.787 19:15:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.787 19:15:44 -- accel/accel.sh@42 -- # jq -r . 00:06:46.787 [2024-11-17 19:15:44.853341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
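The -x 3 variant above is the same xor operation with three source buffers (Source buffers: 3); generalized to N sources it is just one extra fold per byte. An illustrative sketch only, not SPDK's implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative N-source xor: dst[i] = srcs[0][i] ^ ... ^ srcs[nsrc-1][i]. */
void xor_nsrc(uint8_t *dst, const uint8_t *const *srcs, int nsrc, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = srcs[0][i];
        for (int s = 1; s < nsrc; s++)
            v ^= srcs[s][i];
        dst[i] = v;
    }
}
```

The drop from 192768 transfers/s with two sources to 185216/s with three is consistent with the extra source buffer each operation has to read.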
00:06:46.787 [2024-11-17 19:15:44.853417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087715 ] 00:06:46.787 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.787 [2024-11-17 19:15:44.914052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.787 [2024-11-17 19:15:45.002466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val=0x1 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val=xor 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val=3 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val=software 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val=32 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.045 19:15:45 -- accel/accel.sh@21 -- # val=32 00:06:47.045 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.045 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.046 19:15:45 -- 
accel/accel.sh@21 -- # val=1 00:06:47.046 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.046 19:15:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.046 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.046 19:15:45 -- accel/accel.sh@21 -- # val=Yes 00:06:47.046 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.046 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.046 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.046 19:15:45 -- accel/accel.sh@21 -- # val= 00:06:47.046 19:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.046 19:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@21 -- # val= 00:06:47.979 19:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@21 -- # val= 00:06:47.979 19:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@21 -- # val= 00:06:47.979 19:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@21 -- # val= 00:06:47.979 19:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@21 -- # val= 00:06:47.979 19:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@21 -- # val= 00:06:47.979 19:15:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.979 19:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.979 19:15:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.979 19:15:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:47.979 19:15:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.979 00:06:47.979 real 0m2.794s 00:06:47.979 user 0m2.491s 00:06:47.979 sys 0m0.295s 00:06:47.979 19:15:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.979 19:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.979 ************************************ 00:06:47.979 END TEST accel_xor 00:06:47.979 ************************************ 00:06:48.238 19:15:46 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:48.238 19:15:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:48.238 19:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.238 19:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.238 ************************************ 00:06:48.238 START TEST 
accel_dif_verify 00:06:48.238 ************************************ 00:06:48.238 19:15:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:48.238 19:15:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.238 19:15:46 -- accel/accel.sh@17 -- # local accel_module 00:06:48.238 19:15:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:48.238 19:15:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:48.238 19:15:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.238 19:15:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.238 19:15:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.238 19:15:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.238 19:15:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.238 19:15:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.238 19:15:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.238 19:15:46 -- accel/accel.sh@42 -- # jq -r . 00:06:48.238 [2024-11-17 19:15:46.274582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:48.238 [2024-11-17 19:15:46.274656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087877 ] 00:06:48.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.238 [2024-11-17 19:15:46.336059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.238 [2024-11-17 19:15:46.425887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.610 19:15:47 -- accel/accel.sh@18 -- # out=' 00:06:49.610 SPDK Configuration: 00:06:49.610 Core mask: 0x1 00:06:49.610 00:06:49.610 Accel Perf Configuration: 00:06:49.610 Workload Type: dif_verify 00:06:49.610 Vector size: 4096 bytes 00:06:49.610 Transfer size: 4096 bytes 00:06:49.610 Block size: 512 bytes 00:06:49.610 Metadata size: 8 bytes 00:06:49.610 Vector count 1 00:06:49.610 Module: software 00:06:49.610 Queue depth: 32 00:06:49.610 Allocate depth: 32 00:06:49.610 # threads/core: 1 00:06:49.610 Run time: 1 seconds 00:06:49.610 Verify: No 00:06:49.610 00:06:49.610 Running for 1 seconds... 00:06:49.610 00:06:49.610 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.610 ------------------------------------------------------------------------------------ 00:06:49.610 0,0 82496/s 327 MiB/s 0 0 00:06:49.610 ==================================================================================== 00:06:49.610 Total 82496/s 322 MiB/s 0 0' 00:06:49.610 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.610 19:15:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:49.610 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.610 19:15:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:49.610 19:15:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.610 19:15:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.610 19:15:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.610 19:15:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.610 19:15:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.610 19:15:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.610 19:15:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.610 19:15:47 -- accel/accel.sh@42 -- # jq -r . 
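The accel_dif_verify pass above exercises the same accel_perf example binary as the xor tests, changing only the -w workload flag; its first run reported Module: software and Verify: No. A minimal reproduction sketch, assuming an SPDK tree built at the workspace path shown in the trace, and assuming the -c /dev/fd/62 JSON config can be omitted here (build_accel_config above produced an empty accel_json_cfg, so no module override is in effect):

# hypothetical standalone rerun of the dif_verify workload traced above
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path taken from the trace
$SPDK_DIR/build/examples/accel_perf -t 1 -w dif_verify       # 1-second run; the job above reported Module: software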
00:06:49.610 [2024-11-17 19:15:47.675969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:49.610 [2024-11-17 19:15:47.676055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088134 ] 00:06:49.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.610 [2024-11-17 19:15:47.737132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.610 [2024-11-17 19:15:47.827213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val=0x1 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val=dif_verify 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val=software 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val=32 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val=32 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.868 19:15:47 -- accel/accel.sh@21 -- # val=1 00:06:49.868 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.868 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.869 19:15:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.869 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.869 19:15:47 -- accel/accel.sh@21 -- # val=No 00:06:49.869 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.869 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.869 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.869 19:15:47 -- accel/accel.sh@21 -- # val= 00:06:49.869 19:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.869 19:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@21 -- # val= 00:06:50.806 19:15:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@21 -- # val= 00:06:50.806 19:15:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@21 -- # val= 00:06:50.806 19:15:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@21 -- # val= 00:06:50.806 19:15:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@21 -- # val= 00:06:50.806 19:15:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@21 -- # val= 00:06:50.806 19:15:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.806 19:15:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.806 19:15:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.806 19:15:49 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:50.806 19:15:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.806 00:06:50.806 real 0m2.799s 00:06:50.806 user 0m2.500s 00:06:50.806 sys 0m0.292s 00:06:50.806 19:15:49 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.806 19:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.806 ************************************ 00:06:50.806 END TEST accel_dif_verify 00:06:50.806 ************************************ 00:06:51.131 19:15:49 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:51.131 19:15:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:51.131 19:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.131 19:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.131 ************************************ 00:06:51.131 START TEST accel_dif_generate 00:06:51.131 ************************************ 00:06:51.131 19:15:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:51.131 19:15:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.131 19:15:49 -- accel/accel.sh@17 -- # local accel_module 00:06:51.131 19:15:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:51.131 19:15:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.131 19:15:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.131 19:15:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.131 19:15:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.131 19:15:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.131 19:15:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.131 19:15:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.131 19:15:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.131 19:15:49 -- accel/accel.sh@42 -- # jq -r . 00:06:51.131 [2024-11-17 19:15:49.097137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.131 [2024-11-17 19:15:49.097219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088292 ] 00:06:51.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.131 [2024-11-17 19:15:49.159776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.131 [2024-11-17 19:15:49.249487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.520 19:15:50 -- accel/accel.sh@18 -- # out=' 00:06:52.520 SPDK Configuration: 00:06:52.520 Core mask: 0x1 00:06:52.520 00:06:52.520 Accel Perf Configuration: 00:06:52.520 Workload Type: dif_generate 00:06:52.520 Vector size: 4096 bytes 00:06:52.520 Transfer size: 4096 bytes 00:06:52.520 Block size: 512 bytes 00:06:52.520 Metadata size: 8 bytes 00:06:52.520 Vector count 1 00:06:52.520 Module: software 00:06:52.520 Queue depth: 32 00:06:52.520 Allocate depth: 32 00:06:52.520 # threads/core: 1 00:06:52.520 Run time: 1 seconds 00:06:52.520 Verify: No 00:06:52.520 00:06:52.520 Running for 1 seconds... 
00:06:52.520 00:06:52.520 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.520 ------------------------------------------------------------------------------------ 00:06:52.520 0,0 98848/s 392 MiB/s 0 0 00:06:52.520 ==================================================================================== 00:06:52.520 Total 98848/s 386 MiB/s 0 0' 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:52.520 19:15:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.520 19:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.520 19:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.520 19:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.520 19:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.520 19:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.520 19:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.520 19:15:50 -- accel/accel.sh@42 -- # jq -r . 00:06:52.520 [2024-11-17 19:15:50.506276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.520 [2024-11-17 19:15:50.506356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088436 ] 00:06:52.520 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.520 [2024-11-17 19:15:50.571391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.520 [2024-11-17 19:15:50.660731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val=0x1 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val=dif_generate 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 
00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val=software 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.520 19:15:50 -- accel/accel.sh@21 -- # val=32 00:06:52.520 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.520 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.521 19:15:50 -- accel/accel.sh@21 -- # val=32 00:06:52.521 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.521 19:15:50 -- accel/accel.sh@21 -- # val=1 00:06:52.521 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.521 19:15:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.521 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.521 19:15:50 -- accel/accel.sh@21 -- # val=No 00:06:52.521 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.521 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.521 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:52.521 19:15:50 -- accel/accel.sh@21 -- # val= 00:06:52.521 19:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # IFS=: 00:06:52.521 19:15:50 -- accel/accel.sh@20 -- # read -r var val 00:06:53.894 19:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.894 19:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.894 19:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.894 19:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.894 19:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.894 19:15:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.894 19:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.894 19:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.894 19:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.894 19:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.894 19:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.895 19:15:51 -- accel/accel.sh@21 -- # val= 00:06:53.895 19:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.895 19:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.895 19:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.895 19:15:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.895 19:15:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:53.895 19:15:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.895 00:06:53.895 real 0m2.800s 00:06:53.895 user 0m2.512s 00:06:53.895 sys 0m0.280s 00:06:53.895 19:15:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.895 19:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.895 ************************************ 00:06:53.895 END TEST accel_dif_generate 00:06:53.895 ************************************ 00:06:53.895 19:15:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:53.895 19:15:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.895 19:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.895 19:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.895 ************************************ 00:06:53.895 START TEST accel_dif_generate_copy 00:06:53.895 ************************************ 00:06:53.895 19:15:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.895 19:15:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.895 19:15:51 -- accel/accel.sh@17 -- # local accel_module 00:06:53.895 19:15:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.895 19:15:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.895 19:15:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.895 19:15:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.895 19:15:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.895 19:15:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.895 19:15:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.895 19:15:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.895 19:15:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.895 19:15:51 -- accel/accel.sh@42 -- # jq -r . 00:06:53.895 [2024-11-17 19:15:51.923754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:53.895 [2024-11-17 19:15:51.923832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088600 ] 00:06:53.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.895 [2024-11-17 19:15:51.985535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.895 [2024-11-17 19:15:52.079010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.269 19:15:53 -- accel/accel.sh@18 -- # out=' 00:06:55.269 SPDK Configuration: 00:06:55.269 Core mask: 0x1 00:06:55.269 00:06:55.269 Accel Perf Configuration: 00:06:55.269 Workload Type: dif_generate_copy 00:06:55.269 Vector size: 4096 bytes 00:06:55.269 Transfer size: 4096 bytes 00:06:55.269 Vector count 1 00:06:55.269 Module: software 00:06:55.269 Queue depth: 32 00:06:55.269 Allocate depth: 32 00:06:55.269 # threads/core: 1 00:06:55.269 Run time: 1 seconds 00:06:55.269 Verify: No 00:06:55.269 00:06:55.269 Running for 1 seconds... 00:06:55.269 00:06:55.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.269 ------------------------------------------------------------------------------------ 00:06:55.269 0,0 76128/s 302 MiB/s 0 0 00:06:55.269 ==================================================================================== 00:06:55.269 Total 76128/s 297 MiB/s 0 0' 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:55.269 19:15:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.269 19:15:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.269 19:15:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.269 19:15:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.269 19:15:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.269 19:15:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.269 19:15:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.269 19:15:53 -- accel/accel.sh@42 -- # jq -r . 00:06:55.269 [2024-11-17 19:15:53.322267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:55.269 [2024-11-17 19:15:53.322345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088859 ] 00:06:55.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.269 [2024-11-17 19:15:53.383214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.269 [2024-11-17 19:15:53.473230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val=0x1 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.269 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.269 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.269 19:15:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val=software 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val=32 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val=32 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r 
var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val=1 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.527 19:15:53 -- accel/accel.sh@21 -- # val=No 00:06:55.527 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.527 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.528 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.528 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.528 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.528 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.528 19:15:53 -- accel/accel.sh@21 -- # val= 00:06:55.528 19:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.528 19:15:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.528 19:15:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.462 19:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.462 19:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.462 19:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.462 19:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.462 19:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@21 -- # val= 00:06:56.462 19:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.462 19:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.462 19:15:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.462 19:15:54 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:56.462 19:15:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.462 00:06:56.462 real 0m2.801s 00:06:56.462 user 0m2.502s 00:06:56.462 sys 0m0.290s 00:06:56.462 19:15:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.462 19:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.462 ************************************ 00:06:56.462 END TEST accel_dif_generate_copy 00:06:56.462 ************************************ 00:06:56.462 19:15:54 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:56.462 19:15:54 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.462 19:15:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:56.722 19:15:54 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.722 19:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.722 ************************************ 00:06:56.722 START TEST accel_comp 00:06:56.722 ************************************ 00:06:56.722 19:15:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.722 19:15:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.722 19:15:54 -- accel/accel.sh@17 -- # local accel_module 00:06:56.722 19:15:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.722 19:15:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.722 19:15:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.722 19:15:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.722 19:15:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.722 19:15:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.722 19:15:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.722 19:15:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.722 19:15:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.722 19:15:54 -- accel/accel.sh@42 -- # jq -r . 00:06:56.722 [2024-11-17 19:15:54.749819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:56.722 [2024-11-17 19:15:54.749891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089022 ] 00:06:56.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.722 [2024-11-17 19:15:54.812650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.722 [2024-11-17 19:15:54.903696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.096 19:15:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:58.096 00:06:58.096 SPDK Configuration: 00:06:58.096 Core mask: 0x1 00:06:58.096 00:06:58.096 Accel Perf Configuration: 00:06:58.096 Workload Type: compress 00:06:58.096 Transfer size: 4096 bytes 00:06:58.096 Vector count 1 00:06:58.096 Module: software 00:06:58.096 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.096 Queue depth: 32 00:06:58.096 Allocate depth: 32 00:06:58.096 # threads/core: 1 00:06:58.097 Run time: 1 seconds 00:06:58.097 Verify: No 00:06:58.097 00:06:58.097 Running for 1 seconds... 
00:06:58.097 00:06:58.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.097 ------------------------------------------------------------------------------------ 00:06:58.097 0,0 31616/s 131 MiB/s 0 0 00:06:58.097 ==================================================================================== 00:06:58.097 Total 31616/s 123 MiB/s 0 0' 00:06:58.097 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.097 19:15:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.097 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.097 19:15:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.097 19:15:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.097 19:15:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.097 19:15:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.097 19:15:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.097 19:15:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.097 19:15:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.097 19:15:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.097 19:15:56 -- accel/accel.sh@42 -- # jq -r . 00:06:58.097 [2024-11-17 19:15:56.162745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.097 [2024-11-17 19:15:56.162821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089163 ] 00:06:58.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.097 [2024-11-17 19:15:56.223916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.097 [2024-11-17 19:15:56.314350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=0x1 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=compress 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 
19:15:56 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=software 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=32 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=32 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.355 19:15:56 -- accel/accel.sh@21 -- # val=1 00:06:58.355 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.355 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.356 19:15:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.356 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.356 19:15:56 -- accel/accel.sh@21 -- # val=No 00:06:58.356 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.356 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.356 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.356 19:15:56 -- accel/accel.sh@21 -- # val= 00:06:58.356 19:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.356 19:15:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.289 19:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.289 19:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.289 19:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # 
IFS=: 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.289 19:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.289 19:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@21 -- # val= 00:06:59.289 19:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.289 19:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.289 19:15:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.289 19:15:57 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:59.289 19:15:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.289 00:06:59.289 real 0m2.821s 00:06:59.289 user 0m2.512s 00:06:59.289 sys 0m0.301s 00:06:59.289 19:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.289 19:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:59.289 ************************************ 00:06:59.289 END TEST accel_comp 00:06:59.289 ************************************ 00:06:59.547 19:15:57 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.547 19:15:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:59.547 19:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.547 19:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:59.547 ************************************ 00:06:59.547 START TEST accel_decomp 00:06:59.547 ************************************ 00:06:59.547 19:15:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.547 19:15:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.547 19:15:57 -- accel/accel.sh@17 -- # local accel_module 00:06:59.547 19:15:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.547 19:15:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.547 19:15:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.547 19:15:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.547 19:15:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.547 19:15:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.547 19:15:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.547 19:15:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.547 19:15:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.547 19:15:57 -- accel/accel.sh@42 -- # jq -r . 00:06:59.547 [2024-11-17 19:15:57.593157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:59.547 [2024-11-17 19:15:57.593219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089320 ] 00:06:59.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.547 [2024-11-17 19:15:57.653108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.547 [2024-11-17 19:15:57.748750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.919 19:15:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:00.919 00:07:00.919 SPDK Configuration: 00:07:00.919 Core mask: 0x1 00:07:00.919 00:07:00.919 Accel Perf Configuration: 00:07:00.919 Workload Type: decompress 00:07:00.919 Transfer size: 4096 bytes 00:07:00.919 Vector count 1 00:07:00.919 Module: software 00:07:00.919 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.919 Queue depth: 32 00:07:00.919 Allocate depth: 32 00:07:00.919 # threads/core: 1 00:07:00.919 Run time: 1 seconds 00:07:00.919 Verify: Yes 00:07:00.919 00:07:00.919 Running for 1 seconds... 00:07:00.919 00:07:00.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.919 ------------------------------------------------------------------------------------ 00:07:00.919 0,0 55360/s 102 MiB/s 0 0 00:07:00.919 ==================================================================================== 00:07:00.920 Total 55360/s 216 MiB/s 0 0' 00:07:00.920 19:15:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.920 19:15:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.920 19:15:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.920 19:15:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.920 19:15:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.920 19:15:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.920 19:15:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.920 19:15:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.920 19:15:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.920 19:15:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.920 19:15:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.920 19:15:58 -- accel/accel.sh@42 -- # jq -r . 00:07:00.920 [2024-11-17 19:15:58.999057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:00.920 [2024-11-17 19:15:58.999123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089584 ] 00:07:00.920 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.920 [2024-11-17 19:15:59.060469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.920 [2024-11-17 19:15:59.151881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=0x1 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=decompress 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=software 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=32 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 
-- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=32 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=1 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val=Yes 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.177 19:15:59 -- accel/accel.sh@21 -- # val= 00:07:01.177 19:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.177 19:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@21 -- # val= 00:07:02.551 19:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@21 -- # val= 00:07:02.551 19:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@21 -- # val= 00:07:02.551 19:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@21 -- # val= 00:07:02.551 19:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@21 -- # val= 00:07:02.551 19:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@21 -- # val= 00:07:02.551 19:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:02.551 19:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:02.551 19:16:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.551 19:16:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:02.551 19:16:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.551 00:07:02.551 real 0m2.810s 00:07:02.551 user 0m2.517s 00:07:02.551 sys 0m0.286s 00:07:02.551 19:16:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.551 19:16:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.551 ************************************ 00:07:02.551 END TEST accel_decomp 00:07:02.551 ************************************ 00:07:02.551 19:16:00 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.551 19:16:00 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:02.551 19:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.551 19:16:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.551 ************************************ 00:07:02.551 START TEST accel_decmop_full 00:07:02.551 ************************************ 00:07:02.551 19:16:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.551 19:16:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.551 19:16:00 -- accel/accel.sh@17 -- # local accel_module 00:07:02.551 19:16:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.551 19:16:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.551 19:16:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.551 19:16:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.551 19:16:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.551 19:16:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.551 19:16:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.551 19:16:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.551 19:16:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.551 19:16:00 -- accel/accel.sh@42 -- # jq -r . 00:07:02.551 [2024-11-17 19:16:00.432509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:02.551 [2024-11-17 19:16:00.432585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089741 ] 00:07:02.551 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.551 [2024-11-17 19:16:00.493904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.551 [2024-11-17 19:16:00.585033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.926 19:16:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:03.926 00:07:03.926 SPDK Configuration: 00:07:03.926 Core mask: 0x1 00:07:03.926 00:07:03.926 Accel Perf Configuration: 00:07:03.926 Workload Type: decompress 00:07:03.926 Transfer size: 111250 bytes 00:07:03.926 Vector count 1 00:07:03.926 Module: software 00:07:03.926 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.926 Queue depth: 32 00:07:03.926 Allocate depth: 32 00:07:03.926 # threads/core: 1 00:07:03.926 Run time: 1 seconds 00:07:03.926 Verify: Yes 00:07:03.926 00:07:03.926 Running for 1 seconds... 
00:07:03.926 00:07:03.926 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.926 ------------------------------------------------------------------------------------ 00:07:03.926 0,0 3808/s 157 MiB/s 0 0 00:07:03.926 ==================================================================================== 00:07:03.926 Total 3808/s 404 MiB/s 0 0' 00:07:03.926 19:16:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:03.926 19:16:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:03.926 19:16:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.926 19:16:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.926 19:16:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.926 19:16:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.926 19:16:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.926 19:16:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.926 19:16:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.926 19:16:01 -- accel/accel.sh@42 -- # jq -r . 00:07:03.926 [2024-11-17 19:16:01.851936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.926 [2024-11-17 19:16:01.852029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089888 ] 00:07:03.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.926 [2024-11-17 19:16:01.913263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.926 [2024-11-17 19:16:02.003332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=0x1 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=decompress 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:03.926 19:16:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=software 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=32 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=32 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=1 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val=Yes 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.926 19:16:02 -- accel/accel.sh@21 -- # val= 00:07:03.926 19:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.926 19:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.300 19:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.300 19:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.300 19:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.300 19:16:03 -- 
accel/accel.sh@20 -- # IFS=: 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.300 19:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.300 19:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@21 -- # val= 00:07:05.300 19:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.300 19:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.300 19:16:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.300 19:16:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:05.300 19:16:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.300 00:07:05.300 real 0m2.844s 00:07:05.300 user 0m2.548s 00:07:05.300 sys 0m0.289s 00:07:05.300 19:16:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.300 19:16:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.300 ************************************ 00:07:05.300 END TEST accel_decmop_full 00:07:05.300 ************************************ 00:07:05.300 19:16:03 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.300 19:16:03 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:05.300 19:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.300 19:16:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.300 ************************************ 00:07:05.300 START TEST accel_decomp_mcore 00:07:05.300 ************************************ 00:07:05.300 19:16:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.300 19:16:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.300 19:16:03 -- accel/accel.sh@17 -- # local accel_module 00:07:05.300 19:16:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.300 19:16:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.300 19:16:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.300 19:16:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.300 19:16:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.300 19:16:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.300 19:16:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.300 19:16:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.300 19:16:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.300 19:16:03 -- accel/accel.sh@42 -- # jq -r . 00:07:05.300 [2024-11-17 19:16:03.300459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
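For orientation while reading the banners in this log: the ************ START TEST / END TEST blocks and the real/user/sys timings are emitted by the run_test helper from common/autotest_common.sh, which the trace shows being invoked as run_test <name> <command ...>. The snippet below is only a rough sketch of that pattern under the assumption that the banner and timing come from echo and the time builtin; the real helper also toggles xtrace and tracks failures, so its body differs from this illustration.

run_test() {                      # illustrative sketch only, not the SPDK implementation
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # the time builtin produces the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}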
00:07:05.300 [2024-11-17 19:16:03.300537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090078 ] 00:07:05.300 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.300 [2024-11-17 19:16:03.362816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.300 [2024-11-17 19:16:03.455564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.300 [2024-11-17 19:16:03.455642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.300 [2024-11-17 19:16:03.455738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.300 [2024-11-17 19:16:03.455735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.673 19:16:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:06.673 00:07:06.673 SPDK Configuration: 00:07:06.673 Core mask: 0xf 00:07:06.673 00:07:06.673 Accel Perf Configuration: 00:07:06.673 Workload Type: decompress 00:07:06.673 Transfer size: 4096 bytes 00:07:06.673 Vector count 1 00:07:06.673 Module: software 00:07:06.673 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.673 Queue depth: 32 00:07:06.673 Allocate depth: 32 00:07:06.673 # threads/core: 1 00:07:06.673 Run time: 1 seconds 00:07:06.673 Verify: Yes 00:07:06.673 00:07:06.673 Running for 1 seconds... 00:07:06.673 00:07:06.673 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.673 ------------------------------------------------------------------------------------ 00:07:06.673 0,0 56512/s 104 MiB/s 0 0 00:07:06.673 3,0 57056/s 105 MiB/s 0 0 00:07:06.673 2,0 57024/s 105 MiB/s 0 0 00:07:06.673 1,0 56800/s 104 MiB/s 0 0 00:07:06.673 ==================================================================================== 00:07:06.673 Total 227392/s 888 MiB/s 0 0' 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.673 19:16:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.673 19:16:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.673 19:16:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.673 19:16:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.673 19:16:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.673 19:16:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.673 19:16:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.673 19:16:04 -- accel/accel.sh@42 -- # jq -r . 00:07:06.673 [2024-11-17 19:16:04.701812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
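The accel_decomp_mcore run above can be approximated outside the test harness by calling the accel_perf example directly with the flags visible in the trace. This is a sketch based only on the command line shown in the log; the harness additionally passes -c /dev/fd/62 to feed a generated JSON accel config over a file descriptor, which is assumed to be omissible when the plain software module (the one reported in the configuration output above) is acceptable.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1: run for 1 second, -w decompress: workload type, -l: pre-compressed input,
# -y: verify decompressed output, -m 0xf: core mask for four reactors
"$SPDK/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -m 0xf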
00:07:06.673 [2024-11-17 19:16:04.701888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090308 ] 00:07:06.673 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.673 [2024-11-17 19:16:04.763129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.673 [2024-11-17 19:16:04.856303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.673 [2024-11-17 19:16:04.856375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.673 [2024-11-17 19:16:04.856466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.673 [2024-11-17 19:16:04.856470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=0xf 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=decompress 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=software 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=32 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=32 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=1 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val=Yes 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.673 19:16:04 -- accel/accel.sh@21 -- # val= 00:07:06.673 19:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.673 19:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 
19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@21 -- # val= 00:07:08.047 19:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.047 19:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.047 19:16:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.047 19:16:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:08.047 19:16:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.047 00:07:08.047 real 0m2.817s 00:07:08.047 user 0m9.377s 00:07:08.047 sys 0m0.303s 00:07:08.047 19:16:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.047 19:16:06 -- common/autotest_common.sh@10 -- # set +x 00:07:08.047 ************************************ 00:07:08.047 END TEST accel_decomp_mcore 00:07:08.047 ************************************ 00:07:08.047 19:16:06 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.047 19:16:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:08.047 19:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.047 19:16:06 -- common/autotest_common.sh@10 -- # set +x 00:07:08.047 ************************************ 00:07:08.047 START TEST accel_decomp_full_mcore 00:07:08.047 ************************************ 00:07:08.047 19:16:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.047 19:16:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.047 19:16:06 -- accel/accel.sh@17 -- # local accel_module 00:07:08.047 19:16:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.047 19:16:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.047 19:16:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.047 19:16:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.047 19:16:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.047 19:16:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.047 19:16:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.047 19:16:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.047 19:16:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.047 19:16:06 -- accel/accel.sh@42 -- # jq -r . 00:07:08.047 [2024-11-17 19:16:06.143171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:08.047 [2024-11-17 19:16:06.143250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090472 ] 00:07:08.047 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.047 [2024-11-17 19:16:06.204753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.047 [2024-11-17 19:16:06.297701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.047 [2024-11-17 19:16:06.297755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.047 [2024-11-17 19:16:06.297843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.047 [2024-11-17 19:16:06.297846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.418 19:16:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:09.418 00:07:09.418 SPDK Configuration: 00:07:09.418 Core mask: 0xf 00:07:09.418 00:07:09.418 Accel Perf Configuration: 00:07:09.418 Workload Type: decompress 00:07:09.418 Transfer size: 111250 bytes 00:07:09.418 Vector count 1 00:07:09.418 Module: software 00:07:09.418 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.418 Queue depth: 32 00:07:09.418 Allocate depth: 32 00:07:09.418 # threads/core: 1 00:07:09.418 Run time: 1 seconds 00:07:09.418 Verify: Yes 00:07:09.418 00:07:09.418 Running for 1 seconds... 00:07:09.418 00:07:09.418 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.418 ------------------------------------------------------------------------------------ 00:07:09.418 0,0 4256/s 175 MiB/s 0 0 00:07:09.418 3,0 4256/s 175 MiB/s 0 0 00:07:09.418 2,0 4256/s 175 MiB/s 0 0 00:07:09.418 1,0 4288/s 177 MiB/s 0 0 00:07:09.418 ==================================================================================== 00:07:09.418 Total 17056/s 1809 MiB/s 0 0' 00:07:09.418 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.418 19:16:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.418 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.418 19:16:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:09.418 19:16:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.418 19:16:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.418 19:16:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.418 19:16:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.418 19:16:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.418 19:16:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.418 19:16:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.418 19:16:07 -- accel/accel.sh@42 -- # jq -r . 00:07:09.418 [2024-11-17 19:16:07.564469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
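As a quick cross-check of the accel_decomp_full_mcore table above, the Total row is consistent with transfers per second multiplied by the reported 111250-byte transfer size; the one-liner below is just that arithmetic, not part of the test output.

# 17056 transfers/s * 111250 bytes, converted to MiB/s
awk 'BEGIN { printf "%.1f MiB/s\n", 17056 * 111250 / (1024 * 1024) }'
# prints 1809.6 MiB/s, in line with the "Total 17056/s 1809 MiB/s" row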
00:07:09.418 [2024-11-17 19:16:07.564548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090618 ] 00:07:09.418 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.418 [2024-11-17 19:16:07.624880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.677 [2024-11-17 19:16:07.719283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.677 [2024-11-17 19:16:07.719350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.677 [2024-11-17 19:16:07.719441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.677 [2024-11-17 19:16:07.719443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=0xf 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=decompress 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=software 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=32 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=32 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=1 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val=Yes 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.677 19:16:07 -- accel/accel.sh@21 -- # val= 00:07:09.677 19:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.677 19:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 
19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@21 -- # val= 00:07:11.051 19:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.051 19:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.051 19:16:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.051 19:16:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.051 19:16:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.051 00:07:11.051 real 0m2.842s 00:07:11.051 user 0m9.492s 00:07:11.051 sys 0m0.306s 00:07:11.051 19:16:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.051 19:16:08 -- common/autotest_common.sh@10 -- # set +x 00:07:11.051 ************************************ 00:07:11.051 END TEST accel_decomp_full_mcore 00:07:11.051 ************************************ 00:07:11.051 19:16:08 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.051 19:16:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:11.051 19:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.051 19:16:08 -- common/autotest_common.sh@10 -- # set +x 00:07:11.051 ************************************ 00:07:11.051 START TEST accel_decomp_mthread 00:07:11.051 ************************************ 00:07:11.051 19:16:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.051 19:16:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.051 19:16:08 -- accel/accel.sh@17 -- # local accel_module 00:07:11.051 19:16:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.051 19:16:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.051 19:16:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.051 19:16:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.051 19:16:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.051 19:16:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.051 19:16:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.051 19:16:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.051 19:16:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.051 19:16:08 -- accel/accel.sh@42 -- # jq -r . 00:07:11.051 [2024-11-17 19:16:09.010236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.051 [2024-11-17 19:16:09.010313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090856 ] 00:07:11.051 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.051 [2024-11-17 19:16:09.072840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.051 [2024-11-17 19:16:09.160495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.425 19:16:10 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:12.425 00:07:12.425 SPDK Configuration: 00:07:12.425 Core mask: 0x1 00:07:12.425 00:07:12.425 Accel Perf Configuration: 00:07:12.425 Workload Type: decompress 00:07:12.425 Transfer size: 4096 bytes 00:07:12.425 Vector count 1 00:07:12.425 Module: software 00:07:12.425 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.425 Queue depth: 32 00:07:12.425 Allocate depth: 32 00:07:12.425 # threads/core: 2 00:07:12.425 Run time: 1 seconds 00:07:12.425 Verify: Yes 00:07:12.425 00:07:12.425 Running for 1 seconds... 00:07:12.425 00:07:12.425 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.425 ------------------------------------------------------------------------------------ 00:07:12.425 0,1 28032/s 51 MiB/s 0 0 00:07:12.425 0,0 27904/s 51 MiB/s 0 0 00:07:12.425 ==================================================================================== 00:07:12.425 Total 55936/s 218 MiB/s 0 0' 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.425 19:16:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.425 19:16:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.425 19:16:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.425 19:16:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.425 19:16:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.425 19:16:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.425 19:16:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.425 19:16:10 -- accel/accel.sh@42 -- # jq -r . 00:07:12.425 [2024-11-17 19:16:10.396215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
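Judging purely from the command lines and configuration blocks in this section, the accel_decomp_mthread runs differ from the single-threaded case only by -T 2 (two worker threads per core, hence the 0,0 and 0,1 rows above), and the *_full_* variants additionally pass -o 0, which the printed configurations show corresponds to full 111250-byte transfers instead of the default 4096 bytes. A hedged sketch of the threaded invocation, reusing the same paths as the earlier sketch:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Same decompress run as before, but with two threads per core (-T 2);
# add -o 0 to approximate the *_full_* variants with 111250-byte transfers.
"$SPDK/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -T 2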
00:07:12.425 [2024-11-17 19:16:10.396296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091043 ] 00:07:12.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.425 [2024-11-17 19:16:10.457152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.425 [2024-11-17 19:16:10.548199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val=0x1 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val=decompress 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.425 19:16:10 -- accel/accel.sh@21 -- # val=software 00:07:12.425 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.425 19:16:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.425 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val=32 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 
-- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val=32 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val=2 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val=Yes 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.426 19:16:10 -- accel/accel.sh@21 -- # val= 00:07:12.426 19:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.426 19:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@21 -- # val= 00:07:13.799 19:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.799 19:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.799 19:16:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.799 19:16:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:13.799 19:16:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.799 00:07:13.799 real 0m2.803s 00:07:13.799 user 0m2.513s 00:07:13.799 sys 0m0.282s 00:07:13.799 19:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.799 19:16:11 -- common/autotest_common.sh@10 -- # set +x 
00:07:13.799 ************************************ 00:07:13.799 END TEST accel_decomp_mthread 00:07:13.799 ************************************ 00:07:13.799 19:16:11 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.799 19:16:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:13.799 19:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.799 19:16:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.799 ************************************ 00:07:13.799 START TEST accel_deomp_full_mthread 00:07:13.799 ************************************ 00:07:13.799 19:16:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.799 19:16:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.799 19:16:11 -- accel/accel.sh@17 -- # local accel_module 00:07:13.799 19:16:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.799 19:16:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.799 19:16:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.799 19:16:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.799 19:16:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.799 19:16:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.799 19:16:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.799 19:16:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.799 19:16:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.799 19:16:11 -- accel/accel.sh@42 -- # jq -r . 00:07:13.799 [2024-11-17 19:16:11.838548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.799 [2024-11-17 19:16:11.838624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091199 ] 00:07:13.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.799 [2024-11-17 19:16:11.900841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.799 [2024-11-17 19:16:11.990507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.174 19:16:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:15.174 00:07:15.174 SPDK Configuration: 00:07:15.174 Core mask: 0x1 00:07:15.174 00:07:15.174 Accel Perf Configuration: 00:07:15.174 Workload Type: decompress 00:07:15.174 Transfer size: 111250 bytes 00:07:15.174 Vector count 1 00:07:15.174 Module: software 00:07:15.174 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.174 Queue depth: 32 00:07:15.174 Allocate depth: 32 00:07:15.174 # threads/core: 2 00:07:15.174 Run time: 1 seconds 00:07:15.174 Verify: Yes 00:07:15.174 00:07:15.174 Running for 1 seconds... 
00:07:15.174 00:07:15.174 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.174 ------------------------------------------------------------------------------------ 00:07:15.174 0,1 1952/s 80 MiB/s 0 0 00:07:15.174 0,0 1920/s 79 MiB/s 0 0 00:07:15.174 ==================================================================================== 00:07:15.174 Total 3872/s 410 MiB/s 0 0' 00:07:15.174 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.174 19:16:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.174 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.174 19:16:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.174 19:16:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.174 19:16:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.174 19:16:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.174 19:16:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.174 19:16:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.174 19:16:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.174 19:16:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.174 19:16:13 -- accel/accel.sh@42 -- # jq -r . 00:07:15.174 [2024-11-17 19:16:13.275303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.174 [2024-11-17 19:16:13.275382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091348 ] 00:07:15.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.174 [2024-11-17 19:16:13.336889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.174 [2024-11-17 19:16:13.426973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=0x1 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=decompress 00:07:15.434 
19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=software 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=32 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=32 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=2 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val=Yes 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.434 19:16:13 -- accel/accel.sh@21 -- # val= 00:07:15.434 19:16:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.434 19:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@21 -- # val= 00:07:16.808 19:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.808 19:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.808 19:16:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.808 19:16:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.808 19:16:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.808 00:07:16.808 real 0m2.879s 00:07:16.808 user 0m2.583s 00:07:16.808 sys 0m0.289s 00:07:16.808 19:16:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.808 19:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.808 ************************************ 00:07:16.808 END TEST accel_deomp_full_mthread 00:07:16.808 ************************************ 00:07:16.808 19:16:14 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:16.808 19:16:14 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:16.808 19:16:14 -- accel/accel.sh@129 -- # build_accel_config 00:07:16.808 19:16:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:16.808 19:16:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.808 19:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.808 19:16:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.808 19:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.808 19:16:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.808 19:16:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.808 19:16:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.808 19:16:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.808 19:16:14 -- accel/accel.sh@42 -- # jq -r . 00:07:16.808 ************************************ 00:07:16.808 START TEST accel_dif_functional_tests 00:07:16.808 ************************************ 00:07:16.808 19:16:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:16.808 [2024-11-17 19:16:14.764430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:16.808 [2024-11-17 19:16:14.764505] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091592 ] 00:07:16.808 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.808 [2024-11-17 19:16:14.825823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.808 [2024-11-17 19:16:14.916993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.808 [2024-11-17 19:16:14.917048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.808 [2024-11-17 19:16:14.917051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.808 00:07:16.808 00:07:16.808 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.808 http://cunit.sourceforge.net/ 00:07:16.808 00:07:16.808 00:07:16.808 Suite: accel_dif 00:07:16.808 Test: verify: DIF generated, GUARD check ...passed 00:07:16.808 Test: verify: DIF generated, APPTAG check ...passed 00:07:16.808 Test: verify: DIF generated, REFTAG check ...passed 00:07:16.808 Test: verify: DIF not generated, GUARD check ...[2024-11-17 19:16:15.000493] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:16.808 [2024-11-17 19:16:15.000566] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:16.808 passed 00:07:16.808 Test: verify: DIF not generated, APPTAG check ...[2024-11-17 19:16:15.000603] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:16.808 [2024-11-17 19:16:15.000630] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:16.808 passed 00:07:16.808 Test: verify: DIF not generated, REFTAG check ...[2024-11-17 19:16:15.000661] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:16.808 [2024-11-17 19:16:15.000716] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:16.808 passed 00:07:16.808 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:16.808 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-17 19:16:15.000779] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:16.808 passed 00:07:16.808 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:16.808 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:16.808 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:16.808 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-17 19:16:15.000911] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:16.808 passed 00:07:16.809 Test: generate copy: DIF generated, GUARD check ...passed 00:07:16.809 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:16.809 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:16.809 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:16.809 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:16.809 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:16.809 Test: generate copy: iovecs-len validate ...[2024-11-17 19:16:15.001150] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:16.809 passed 00:07:16.809 Test: generate copy: buffer alignment validate ...passed 00:07:16.809 00:07:16.809 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.809 suites 1 1 n/a 0 0 00:07:16.809 tests 20 20 20 0 0 00:07:16.809 asserts 204 204 204 0 n/a 00:07:16.809 00:07:16.809 Elapsed time = 0.002 seconds 00:07:17.067 00:07:17.067 real 0m0.490s 00:07:17.067 user 0m0.757s 00:07:17.067 sys 0m0.172s 00:07:17.067 19:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.067 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.067 ************************************ 00:07:17.067 END TEST accel_dif_functional_tests 00:07:17.067 ************************************ 00:07:17.067 00:07:17.067 real 0m59.880s 00:07:17.067 user 1m7.606s 00:07:17.067 sys 0m7.237s 00:07:17.067 19:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.067 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.067 ************************************ 00:07:17.067 END TEST accel 00:07:17.067 ************************************ 00:07:17.067 19:16:15 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:17.067 19:16:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.067 19:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.067 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.067 ************************************ 00:07:17.067 START TEST accel_rpc 00:07:17.067 ************************************ 00:07:17.067 19:16:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:17.067 * Looking for test storage... 00:07:17.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:17.067 19:16:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:17.067 19:16:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:17.067 19:16:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:17.325 19:16:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:17.325 19:16:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:17.325 19:16:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:17.326 19:16:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:17.326 19:16:15 -- scripts/common.sh@335 -- # IFS=.-: 00:07:17.326 19:16:15 -- scripts/common.sh@335 -- # read -ra ver1 00:07:17.326 19:16:15 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.326 19:16:15 -- scripts/common.sh@336 -- # read -ra ver2 00:07:17.326 19:16:15 -- scripts/common.sh@337 -- # local 'op=<' 00:07:17.326 19:16:15 -- scripts/common.sh@339 -- # ver1_l=2 00:07:17.326 19:16:15 -- scripts/common.sh@340 -- # ver2_l=1 00:07:17.326 19:16:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:17.326 19:16:15 -- scripts/common.sh@343 -- # case "$op" in 00:07:17.326 19:16:15 -- scripts/common.sh@344 -- # : 1 00:07:17.326 19:16:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:17.326 19:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.326 19:16:15 -- scripts/common.sh@364 -- # decimal 1 00:07:17.326 19:16:15 -- scripts/common.sh@352 -- # local d=1 00:07:17.326 19:16:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.326 19:16:15 -- scripts/common.sh@354 -- # echo 1 00:07:17.326 19:16:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:17.326 19:16:15 -- scripts/common.sh@365 -- # decimal 2 00:07:17.326 19:16:15 -- scripts/common.sh@352 -- # local d=2 00:07:17.326 19:16:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.326 19:16:15 -- scripts/common.sh@354 -- # echo 2 00:07:17.326 19:16:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:17.326 19:16:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:17.326 19:16:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:17.326 19:16:15 -- scripts/common.sh@367 -- # return 0 00:07:17.326 19:16:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.326 19:16:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:17.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.326 --rc genhtml_branch_coverage=1 00:07:17.326 --rc genhtml_function_coverage=1 00:07:17.326 --rc genhtml_legend=1 00:07:17.326 --rc geninfo_all_blocks=1 00:07:17.326 --rc geninfo_unexecuted_blocks=1 00:07:17.326 00:07:17.326 ' 00:07:17.326 19:16:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:17.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.326 --rc genhtml_branch_coverage=1 00:07:17.326 --rc genhtml_function_coverage=1 00:07:17.326 --rc genhtml_legend=1 00:07:17.326 --rc geninfo_all_blocks=1 00:07:17.326 --rc geninfo_unexecuted_blocks=1 00:07:17.326 00:07:17.326 ' 00:07:17.326 19:16:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:17.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.326 --rc genhtml_branch_coverage=1 00:07:17.326 --rc genhtml_function_coverage=1 00:07:17.326 --rc genhtml_legend=1 00:07:17.326 --rc geninfo_all_blocks=1 00:07:17.326 --rc geninfo_unexecuted_blocks=1 00:07:17.326 00:07:17.326 ' 00:07:17.326 19:16:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:17.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.326 --rc genhtml_branch_coverage=1 00:07:17.326 --rc genhtml_function_coverage=1 00:07:17.326 --rc genhtml_legend=1 00:07:17.326 --rc geninfo_all_blocks=1 00:07:17.326 --rc geninfo_unexecuted_blocks=1 00:07:17.326 00:07:17.326 ' 00:07:17.326 19:16:15 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.326 19:16:15 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1091705 00:07:17.326 19:16:15 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:17.326 19:16:15 -- accel/accel_rpc.sh@15 -- # waitforlisten 1091705 00:07:17.326 19:16:15 -- common/autotest_common.sh@829 -- # '[' -z 1091705 ']' 00:07:17.326 19:16:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.326 19:16:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.326 19:16:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.326 19:16:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.326 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.326 [2024-11-17 19:16:15.441611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:17.326 [2024-11-17 19:16:15.441736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091705 ] 00:07:17.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.326 [2024-11-17 19:16:15.497746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.326 [2024-11-17 19:16:15.581960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.326 [2024-11-17 19:16:15.582142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.584 19:16:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.584 19:16:15 -- common/autotest_common.sh@862 -- # return 0 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:17.584 19:16:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.584 19:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.584 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.584 ************************************ 00:07:17.584 START TEST accel_assign_opcode 00:07:17.584 ************************************ 00:07:17.584 19:16:15 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:17.584 19:16:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.584 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.584 [2024-11-17 19:16:15.650740] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:17.584 19:16:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:17.584 19:16:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.584 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.584 [2024-11-17 19:16:15.658756] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:17.584 19:16:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.584 19:16:15 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:17.584 19:16:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.584 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.842 19:16:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.842 19:16:15 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:17.842 19:16:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.842 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.842 19:16:15 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:17.842 19:16:15 -- accel/accel_rpc.sh@42 -- # grep software 00:07:17.842 19:16:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:17.842 software 00:07:17.842 00:07:17.842 real 0m0.291s 00:07:17.842 user 0m0.033s 00:07:17.842 sys 0m0.007s 00:07:17.842 19:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.842 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.842 ************************************ 00:07:17.842 END TEST accel_assign_opcode 00:07:17.842 ************************************ 00:07:17.842 19:16:15 -- accel/accel_rpc.sh@55 -- # killprocess 1091705 00:07:17.842 19:16:15 -- common/autotest_common.sh@936 -- # '[' -z 1091705 ']' 00:07:17.842 19:16:15 -- common/autotest_common.sh@940 -- # kill -0 1091705 00:07:17.842 19:16:15 -- common/autotest_common.sh@941 -- # uname 00:07:17.842 19:16:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.842 19:16:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1091705 00:07:17.842 19:16:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.842 19:16:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.842 19:16:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1091705' 00:07:17.843 killing process with pid 1091705 00:07:17.843 19:16:15 -- common/autotest_common.sh@955 -- # kill 1091705 00:07:17.843 19:16:15 -- common/autotest_common.sh@960 -- # wait 1091705 00:07:18.410 00:07:18.410 real 0m1.128s 00:07:18.410 user 0m1.028s 00:07:18.410 sys 0m0.438s 00:07:18.410 19:16:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.410 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 ************************************ 00:07:18.410 END TEST accel_rpc 00:07:18.410 ************************************ 00:07:18.410 19:16:16 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:18.410 19:16:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.410 19:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.410 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 ************************************ 00:07:18.410 START TEST app_cmdline 00:07:18.410 ************************************ 00:07:18.410 19:16:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:18.410 * Looking for test storage... 
00:07:18.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:18.410 19:16:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:18.410 19:16:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:18.410 19:16:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:18.410 19:16:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:18.410 19:16:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:18.410 19:16:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:18.410 19:16:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:18.410 19:16:16 -- scripts/common.sh@335 -- # IFS=.-: 00:07:18.410 19:16:16 -- scripts/common.sh@335 -- # read -ra ver1 00:07:18.410 19:16:16 -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.410 19:16:16 -- scripts/common.sh@336 -- # read -ra ver2 00:07:18.410 19:16:16 -- scripts/common.sh@337 -- # local 'op=<' 00:07:18.410 19:16:16 -- scripts/common.sh@339 -- # ver1_l=2 00:07:18.410 19:16:16 -- scripts/common.sh@340 -- # ver2_l=1 00:07:18.410 19:16:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:18.410 19:16:16 -- scripts/common.sh@343 -- # case "$op" in 00:07:18.410 19:16:16 -- scripts/common.sh@344 -- # : 1 00:07:18.410 19:16:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:18.410 19:16:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.410 19:16:16 -- scripts/common.sh@364 -- # decimal 1 00:07:18.410 19:16:16 -- scripts/common.sh@352 -- # local d=1 00:07:18.410 19:16:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.410 19:16:16 -- scripts/common.sh@354 -- # echo 1 00:07:18.410 19:16:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:18.410 19:16:16 -- scripts/common.sh@365 -- # decimal 2 00:07:18.410 19:16:16 -- scripts/common.sh@352 -- # local d=2 00:07:18.410 19:16:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.410 19:16:16 -- scripts/common.sh@354 -- # echo 2 00:07:18.410 19:16:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:18.410 19:16:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:18.410 19:16:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:18.410 19:16:16 -- scripts/common.sh@367 -- # return 0 00:07:18.410 19:16:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.410 19:16:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 19:16:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 19:16:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 
00:07:18.410 19:16:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 19:16:16 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:18.410 19:16:16 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1091916 00:07:18.410 19:16:16 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:18.411 19:16:16 -- app/cmdline.sh@18 -- # waitforlisten 1091916 00:07:18.411 19:16:16 -- common/autotest_common.sh@829 -- # '[' -z 1091916 ']' 00:07:18.411 19:16:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.411 19:16:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.411 19:16:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.411 19:16:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.411 19:16:16 -- common/autotest_common.sh@10 -- # set +x 00:07:18.411 [2024-11-17 19:16:16.591893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.411 [2024-11-17 19:16:16.591997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091916 ] 00:07:18.411 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.411 [2024-11-17 19:16:16.647932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.669 [2024-11-17 19:16:16.733718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.669 [2024-11-17 19:16:16.733892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.602 19:16:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.602 19:16:17 -- common/autotest_common.sh@862 -- # return 0 00:07:19.602 19:16:17 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:19.602 { 00:07:19.602 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:19.602 "fields": { 00:07:19.602 "major": 24, 00:07:19.602 "minor": 1, 00:07:19.602 "patch": 1, 00:07:19.602 "suffix": "-pre", 00:07:19.602 "commit": "c13c99a5e" 00:07:19.602 } 00:07:19.602 } 00:07:19.602 19:16:17 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:19.602 19:16:17 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:19.602 19:16:17 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:19.602 19:16:17 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:19.602 19:16:17 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:19.602 19:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.602 19:16:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.602 19:16:17 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:19.602 19:16:17 -- app/cmdline.sh@26 -- # sort 00:07:19.602 19:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.602 19:16:17 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:19.602 19:16:17 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:19.602 19:16:17 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.602 19:16:17 -- common/autotest_common.sh@650 -- # local es=0 00:07:19.602 19:16:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.602 19:16:17 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.602 19:16:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.602 19:16:17 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.602 19:16:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.602 19:16:17 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.603 19:16:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.603 19:16:17 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.603 19:16:17 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:19.603 19:16:17 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.860 request: 00:07:19.860 { 00:07:19.860 "method": "env_dpdk_get_mem_stats", 00:07:19.860 "req_id": 1 00:07:19.860 } 00:07:19.860 Got JSON-RPC error response 00:07:19.860 response: 00:07:19.860 { 00:07:19.860 "code": -32601, 00:07:19.860 "message": "Method not found" 00:07:19.860 } 00:07:19.860 19:16:18 -- common/autotest_common.sh@653 -- # es=1 00:07:19.860 19:16:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.860 19:16:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.860 19:16:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.860 19:16:18 -- app/cmdline.sh@1 -- # killprocess 1091916 00:07:19.860 19:16:18 -- common/autotest_common.sh@936 -- # '[' -z 1091916 ']' 00:07:19.860 19:16:18 -- common/autotest_common.sh@940 -- # kill -0 1091916 00:07:19.860 19:16:18 -- common/autotest_common.sh@941 -- # uname 00:07:19.860 19:16:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:19.860 19:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1091916 00:07:19.860 19:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:19.860 19:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:19.860 19:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1091916' 00:07:19.860 killing process with pid 1091916 00:07:19.860 19:16:18 -- common/autotest_common.sh@955 -- # kill 1091916 00:07:19.860 19:16:18 -- common/autotest_common.sh@960 -- # wait 1091916 00:07:20.427 00:07:20.427 real 0m2.094s 00:07:20.427 user 0m2.626s 00:07:20.427 sys 0m0.501s 00:07:20.427 19:16:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.427 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.427 ************************************ 00:07:20.427 END TEST app_cmdline 00:07:20.427 ************************************ 00:07:20.427 19:16:18 -- 
spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:20.427 19:16:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.427 19:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.427 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.427 ************************************ 00:07:20.427 START TEST version 00:07:20.427 ************************************ 00:07:20.427 19:16:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:20.427 * Looking for test storage... 00:07:20.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.427 19:16:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:20.427 19:16:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:20.427 19:16:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:20.427 19:16:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:20.427 19:16:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:20.427 19:16:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:20.427 19:16:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:20.427 19:16:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:20.427 19:16:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:20.427 19:16:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.427 19:16:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:20.427 19:16:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:20.427 19:16:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:20.427 19:16:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:20.427 19:16:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:20.427 19:16:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:20.427 19:16:18 -- scripts/common.sh@344 -- # : 1 00:07:20.427 19:16:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:20.427 19:16:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.427 19:16:18 -- scripts/common.sh@364 -- # decimal 1 00:07:20.427 19:16:18 -- scripts/common.sh@352 -- # local d=1 00:07:20.427 19:16:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.427 19:16:18 -- scripts/common.sh@354 -- # echo 1 00:07:20.427 19:16:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:20.427 19:16:18 -- scripts/common.sh@365 -- # decimal 2 00:07:20.427 19:16:18 -- scripts/common.sh@352 -- # local d=2 00:07:20.427 19:16:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.427 19:16:18 -- scripts/common.sh@354 -- # echo 2 00:07:20.427 19:16:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:20.427 19:16:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:20.427 19:16:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:20.427 19:16:18 -- scripts/common.sh@367 -- # return 0 00:07:20.427 19:16:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.427 19:16:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.427 --rc genhtml_branch_coverage=1 00:07:20.427 --rc genhtml_function_coverage=1 00:07:20.427 --rc genhtml_legend=1 00:07:20.427 --rc geninfo_all_blocks=1 00:07:20.427 --rc geninfo_unexecuted_blocks=1 00:07:20.427 00:07:20.427 ' 00:07:20.427 19:16:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.427 --rc genhtml_branch_coverage=1 00:07:20.427 --rc genhtml_function_coverage=1 00:07:20.427 --rc genhtml_legend=1 00:07:20.427 --rc geninfo_all_blocks=1 00:07:20.427 --rc geninfo_unexecuted_blocks=1 00:07:20.427 00:07:20.427 ' 00:07:20.427 19:16:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.427 --rc genhtml_branch_coverage=1 00:07:20.427 --rc genhtml_function_coverage=1 00:07:20.427 --rc genhtml_legend=1 00:07:20.427 --rc geninfo_all_blocks=1 00:07:20.427 --rc geninfo_unexecuted_blocks=1 00:07:20.427 00:07:20.427 ' 00:07:20.427 19:16:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.427 --rc genhtml_branch_coverage=1 00:07:20.427 --rc genhtml_function_coverage=1 00:07:20.427 --rc genhtml_legend=1 00:07:20.427 --rc geninfo_all_blocks=1 00:07:20.427 --rc geninfo_unexecuted_blocks=1 00:07:20.427 00:07:20.427 ' 00:07:20.427 19:16:18 -- app/version.sh@17 -- # get_header_version major 00:07:20.427 19:16:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.427 19:16:18 -- app/version.sh@14 -- # cut -f2 00:07:20.427 19:16:18 -- app/version.sh@14 -- # tr -d '"' 00:07:20.427 19:16:18 -- app/version.sh@17 -- # major=24 00:07:20.427 19:16:18 -- app/version.sh@18 -- # get_header_version minor 00:07:20.427 19:16:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.427 19:16:18 -- app/version.sh@14 -- # cut -f2 00:07:20.427 19:16:18 -- app/version.sh@14 -- # tr -d '"' 00:07:20.427 19:16:18 -- app/version.sh@18 -- # minor=1 00:07:20.427 19:16:18 -- app/version.sh@19 -- # get_header_version patch 00:07:20.427 19:16:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.427 19:16:18 -- app/version.sh@14 -- # cut -f2 00:07:20.427 19:16:18 -- app/version.sh@14 -- # tr -d '"' 00:07:20.427 19:16:18 -- app/version.sh@19 -- # patch=1 00:07:20.427 19:16:18 -- app/version.sh@20 -- # get_header_version suffix 00:07:20.427 19:16:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:20.427 19:16:18 -- app/version.sh@14 -- # cut -f2 00:07:20.427 19:16:18 -- app/version.sh@14 -- # tr -d '"' 00:07:20.686 19:16:18 -- app/version.sh@20 -- # suffix=-pre 00:07:20.686 19:16:18 -- app/version.sh@22 -- # version=24.1 00:07:20.686 19:16:18 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:20.686 19:16:18 -- app/version.sh@25 -- # version=24.1.1 00:07:20.686 19:16:18 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:20.686 19:16:18 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:20.686 19:16:18 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:20.686 19:16:18 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:20.686 19:16:18 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:20.686 00:07:20.686 real 0m0.191s 00:07:20.686 user 0m0.121s 00:07:20.686 sys 0m0.095s 00:07:20.686 19:16:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.686 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.686 ************************************ 00:07:20.686 END TEST version 00:07:20.686 ************************************ 00:07:20.686 19:16:18 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@191 -- # uname -s 00:07:20.686 19:16:18 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:20.686 19:16:18 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:20.686 19:16:18 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:20.686 19:16:18 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:20.686 19:16:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.686 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.686 19:16:18 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:20.686 19:16:18 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:20.686 19:16:18 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:20.686 19:16:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:20.686 19:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.686 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.686 ************************************ 00:07:20.686 START TEST nvmf_tcp 00:07:20.686 ************************************ 00:07:20.686 19:16:18 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:20.686 * Looking for test storage... 00:07:20.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:20.686 19:16:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:20.686 19:16:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:20.686 19:16:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:20.686 19:16:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:20.686 19:16:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:20.686 19:16:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:20.686 19:16:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:20.686 19:16:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:20.686 19:16:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:20.686 19:16:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.686 19:16:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:20.686 19:16:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:20.686 19:16:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:20.686 19:16:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:20.686 19:16:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:20.686 19:16:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:20.686 19:16:18 -- scripts/common.sh@344 -- # : 1 00:07:20.686 19:16:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:20.686 19:16:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.686 19:16:18 -- scripts/common.sh@364 -- # decimal 1 00:07:20.686 19:16:18 -- scripts/common.sh@352 -- # local d=1 00:07:20.686 19:16:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.686 19:16:18 -- scripts/common.sh@354 -- # echo 1 00:07:20.686 19:16:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:20.686 19:16:18 -- scripts/common.sh@365 -- # decimal 2 00:07:20.686 19:16:18 -- scripts/common.sh@352 -- # local d=2 00:07:20.686 19:16:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.686 19:16:18 -- scripts/common.sh@354 -- # echo 2 00:07:20.686 19:16:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:20.686 19:16:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:20.687 19:16:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:20.687 19:16:18 -- scripts/common.sh@367 -- # return 0 00:07:20.687 19:16:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.687 19:16:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:20.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.687 --rc genhtml_branch_coverage=1 00:07:20.687 --rc genhtml_function_coverage=1 00:07:20.687 --rc genhtml_legend=1 00:07:20.687 --rc geninfo_all_blocks=1 00:07:20.687 --rc geninfo_unexecuted_blocks=1 00:07:20.687 00:07:20.687 ' 00:07:20.687 19:16:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:20.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.687 --rc genhtml_branch_coverage=1 00:07:20.687 --rc genhtml_function_coverage=1 00:07:20.687 --rc genhtml_legend=1 00:07:20.687 --rc geninfo_all_blocks=1 00:07:20.687 --rc geninfo_unexecuted_blocks=1 00:07:20.687 00:07:20.687 ' 00:07:20.687 19:16:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:20.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.687 --rc genhtml_branch_coverage=1 00:07:20.687 --rc genhtml_function_coverage=1 00:07:20.687 --rc 
genhtml_legend=1 00:07:20.687 --rc geninfo_all_blocks=1 00:07:20.687 --rc geninfo_unexecuted_blocks=1 00:07:20.687 00:07:20.687 ' 00:07:20.687 19:16:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:20.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.687 --rc genhtml_branch_coverage=1 00:07:20.687 --rc genhtml_function_coverage=1 00:07:20.687 --rc genhtml_legend=1 00:07:20.687 --rc geninfo_all_blocks=1 00:07:20.687 --rc geninfo_unexecuted_blocks=1 00:07:20.687 00:07:20.687 ' 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.687 19:16:18 -- nvmf/common.sh@7 -- # uname -s 00:07:20.687 19:16:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.687 19:16:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.687 19:16:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.687 19:16:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.687 19:16:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.687 19:16:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.687 19:16:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.687 19:16:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.687 19:16:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.687 19:16:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.687 19:16:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:20.687 19:16:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:20.687 19:16:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.687 19:16:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.687 19:16:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.687 19:16:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.687 19:16:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.687 19:16:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.687 19:16:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.687 19:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.687 19:16:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.687 19:16:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.687 19:16:18 -- paths/export.sh@5 -- # export PATH 00:07:20.687 19:16:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.687 19:16:18 -- nvmf/common.sh@46 -- # : 0 00:07:20.687 19:16:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.687 19:16:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.687 19:16:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.687 19:16:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.687 19:16:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.687 19:16:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.687 19:16:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.687 19:16:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:20.687 19:16:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.687 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:20.687 19:16:18 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:20.687 19:16:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:20.687 19:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.687 19:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.687 ************************************ 00:07:20.687 START TEST nvmf_example 00:07:20.687 ************************************ 00:07:20.687 19:16:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:20.946 * Looking for test storage... 
00:07:20.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.946 19:16:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:20.946 19:16:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:20.946 19:16:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:20.946 19:16:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:20.946 19:16:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:20.946 19:16:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:20.946 19:16:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:20.946 19:16:19 -- scripts/common.sh@335 -- # IFS=.-: 00:07:20.946 19:16:19 -- scripts/common.sh@335 -- # read -ra ver1 00:07:20.946 19:16:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.946 19:16:19 -- scripts/common.sh@336 -- # read -ra ver2 00:07:20.946 19:16:19 -- scripts/common.sh@337 -- # local 'op=<' 00:07:20.946 19:16:19 -- scripts/common.sh@339 -- # ver1_l=2 00:07:20.946 19:16:19 -- scripts/common.sh@340 -- # ver2_l=1 00:07:20.946 19:16:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:20.946 19:16:19 -- scripts/common.sh@343 -- # case "$op" in 00:07:20.946 19:16:19 -- scripts/common.sh@344 -- # : 1 00:07:20.946 19:16:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:20.946 19:16:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.946 19:16:19 -- scripts/common.sh@364 -- # decimal 1 00:07:20.946 19:16:19 -- scripts/common.sh@352 -- # local d=1 00:07:20.946 19:16:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.946 19:16:19 -- scripts/common.sh@354 -- # echo 1 00:07:20.946 19:16:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:20.946 19:16:19 -- scripts/common.sh@365 -- # decimal 2 00:07:20.946 19:16:19 -- scripts/common.sh@352 -- # local d=2 00:07:20.946 19:16:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.946 19:16:19 -- scripts/common.sh@354 -- # echo 2 00:07:20.946 19:16:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:20.946 19:16:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:20.946 19:16:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:20.946 19:16:19 -- scripts/common.sh@367 -- # return 0 00:07:20.946 19:16:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.946 19:16:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:20.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.946 --rc genhtml_branch_coverage=1 00:07:20.946 --rc genhtml_function_coverage=1 00:07:20.946 --rc genhtml_legend=1 00:07:20.946 --rc geninfo_all_blocks=1 00:07:20.946 --rc geninfo_unexecuted_blocks=1 00:07:20.946 00:07:20.946 ' 00:07:20.946 19:16:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:20.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.946 --rc genhtml_branch_coverage=1 00:07:20.946 --rc genhtml_function_coverage=1 00:07:20.946 --rc genhtml_legend=1 00:07:20.946 --rc geninfo_all_blocks=1 00:07:20.946 --rc geninfo_unexecuted_blocks=1 00:07:20.946 00:07:20.946 ' 00:07:20.946 19:16:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:20.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.946 --rc genhtml_branch_coverage=1 00:07:20.946 --rc genhtml_function_coverage=1 00:07:20.946 --rc genhtml_legend=1 00:07:20.946 --rc geninfo_all_blocks=1 00:07:20.946 --rc geninfo_unexecuted_blocks=1 00:07:20.946 00:07:20.946 
' 00:07:20.946 19:16:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:20.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.946 --rc genhtml_branch_coverage=1 00:07:20.946 --rc genhtml_function_coverage=1 00:07:20.946 --rc genhtml_legend=1 00:07:20.946 --rc geninfo_all_blocks=1 00:07:20.946 --rc geninfo_unexecuted_blocks=1 00:07:20.946 00:07:20.946 ' 00:07:20.946 19:16:19 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.946 19:16:19 -- nvmf/common.sh@7 -- # uname -s 00:07:20.946 19:16:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.946 19:16:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.946 19:16:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.946 19:16:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.946 19:16:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.946 19:16:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.946 19:16:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.946 19:16:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.946 19:16:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.946 19:16:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.946 19:16:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:20.946 19:16:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:20.946 19:16:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.946 19:16:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.946 19:16:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.946 19:16:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.946 19:16:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.946 19:16:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.946 19:16:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.946 19:16:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.946 19:16:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.946 19:16:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.946 19:16:19 -- paths/export.sh@5 -- # export PATH 00:07:20.947 19:16:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.947 19:16:19 -- nvmf/common.sh@46 -- # : 0 00:07:20.947 19:16:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.947 19:16:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.947 19:16:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.947 19:16:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.947 19:16:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.947 19:16:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.947 19:16:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.947 19:16:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.947 19:16:19 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:20.947 19:16:19 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:20.947 19:16:19 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:20.947 19:16:19 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:20.947 19:16:19 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:20.947 19:16:19 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:20.947 19:16:19 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:20.947 19:16:19 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:20.947 19:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.947 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.947 19:16:19 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:20.947 19:16:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:20.947 19:16:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.947 19:16:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:20.947 19:16:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:20.947 19:16:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:20.947 19:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.947 19:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.947 19:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.947 19:16:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:20.947 19:16:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:20.947 19:16:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:20.947 19:16:19 -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.846 19:16:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:22.846 19:16:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:22.846 19:16:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:22.846 19:16:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:22.846 19:16:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:22.846 19:16:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:22.846 19:16:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:22.846 19:16:21 -- nvmf/common.sh@294 -- # net_devs=() 00:07:22.846 19:16:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:22.846 19:16:21 -- nvmf/common.sh@295 -- # e810=() 00:07:22.846 19:16:21 -- nvmf/common.sh@295 -- # local -ga e810 00:07:22.846 19:16:21 -- nvmf/common.sh@296 -- # x722=() 00:07:22.846 19:16:21 -- nvmf/common.sh@296 -- # local -ga x722 00:07:22.846 19:16:21 -- nvmf/common.sh@297 -- # mlx=() 00:07:22.846 19:16:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:22.846 19:16:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.846 19:16:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.847 19:16:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.847 19:16:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.847 19:16:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.847 19:16:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:22.847 19:16:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:22.847 19:16:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:22.847 19:16:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:22.847 19:16:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:22.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:22.847 19:16:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:22.847 19:16:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:22.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:22.847 19:16:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
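The trace above selects NICs purely by PCI vendor/device ID (here the two Intel E810 ports, 0x8086:0x159b) and then maps each one to its kernel net interface through sysfs. A minimal stand-alone sketch of that discovery step, assuming lspci and sysfs are available and reusing the same 0x8086:0x159b IDs shown above:

  # List E810-class ports and the net devices the kernel created for them.
  for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do
      echo "Found $pci"
      ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null   # e.g. cvl_0_0, cvl_0_1
  done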
00:07:22.847 19:16:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:22.847 19:16:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:22.847 19:16:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.847 19:16:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:22.847 19:16:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.847 19:16:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:22.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:22.847 19:16:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.847 19:16:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:22.847 19:16:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.847 19:16:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:22.847 19:16:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.847 19:16:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:22.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:22.847 19:16:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.847 19:16:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:22.847 19:16:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:22.847 19:16:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:22.847 19:16:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:22.847 19:16:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.847 19:16:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.847 19:16:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.847 19:16:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:22.847 19:16:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.847 19:16:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.847 19:16:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:22.847 19:16:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.847 19:16:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.847 19:16:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:22.847 19:16:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:22.847 19:16:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.847 19:16:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.847 19:16:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.847 19:16:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.847 19:16:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:22.847 19:16:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.105 19:16:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.105 19:16:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.105 19:16:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:23.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:23.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:07:23.105 00:07:23.105 --- 10.0.0.2 ping statistics --- 00:07:23.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.105 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:23.105 19:16:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:07:23.105 00:07:23.105 --- 10.0.0.1 ping statistics --- 00:07:23.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.105 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:23.105 19:16:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.105 19:16:21 -- nvmf/common.sh@410 -- # return 0 00:07:23.105 19:16:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:23.105 19:16:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.105 19:16:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:23.105 19:16:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:23.105 19:16:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.105 19:16:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:23.105 19:16:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:23.105 19:16:21 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:23.105 19:16:21 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:23.106 19:16:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.106 19:16:21 -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 19:16:21 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:23.106 19:16:21 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:23.106 19:16:21 -- target/nvmf_example.sh@34 -- # nvmfpid=1093979 00:07:23.106 19:16:21 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:23.106 19:16:21 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.106 19:16:21 -- target/nvmf_example.sh@36 -- # waitforlisten 1093979 00:07:23.106 19:16:21 -- common/autotest_common.sh@829 -- # '[' -z 1093979 ']' 00:07:23.106 19:16:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.106 19:16:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.106 19:16:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
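The network bring-up traced above moves one port into a private namespace as the target side (10.0.0.2) and leaves the other in the default namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic actually crosses between the two back-to-back ports. A condensed sketch of that same topology, reusing the interface and namespace names from the trace (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and assuming the two ports are cabled to each other:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability
  modprobe nvme-tcp                                  # kernel NVMe/TCP support for later tests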
00:07:23.106 19:16:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.106 19:16:21 -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.039 19:16:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.039 19:16:22 -- common/autotest_common.sh@862 -- # return 0 00:07:24.039 19:16:22 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:24.039 19:16:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.039 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.039 19:16:22 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.039 19:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.039 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.039 19:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.039 19:16:22 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:24.039 19:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.039 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.039 19:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.039 19:16:22 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:24.039 19:16:22 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.039 19:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.039 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.039 19:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.039 19:16:22 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:24.039 19:16:22 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:24.039 19:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.039 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.039 19:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.039 19:16:22 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.039 19:16:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.039 19:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:24.039 19:16:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.039 19:16:22 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:24.039 19:16:22 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:24.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.312 Initializing NVMe Controllers 00:07:36.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:36.312 Initialization complete. Launching workers. 
00:07:36.312 ======================================================== 00:07:36.312 Latency(us) 00:07:36.312 Device Information : IOPS MiB/s Average min max 00:07:36.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15746.75 61.51 4064.35 755.59 16284.35 00:07:36.312 ======================================================== 00:07:36.312 Total : 15746.75 61.51 4064.35 755.59 16284.35 00:07:36.312 00:07:36.312 19:16:32 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:36.312 19:16:32 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:36.312 19:16:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:36.312 19:16:32 -- nvmf/common.sh@116 -- # sync 00:07:36.312 19:16:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:36.312 19:16:32 -- nvmf/common.sh@119 -- # set +e 00:07:36.312 19:16:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:36.312 19:16:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:36.312 rmmod nvme_tcp 00:07:36.312 rmmod nvme_fabrics 00:07:36.312 rmmod nvme_keyring 00:07:36.312 19:16:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:36.312 19:16:32 -- nvmf/common.sh@123 -- # set -e 00:07:36.312 19:16:32 -- nvmf/common.sh@124 -- # return 0 00:07:36.312 19:16:32 -- nvmf/common.sh@477 -- # '[' -n 1093979 ']' 00:07:36.312 19:16:32 -- nvmf/common.sh@478 -- # killprocess 1093979 00:07:36.312 19:16:32 -- common/autotest_common.sh@936 -- # '[' -z 1093979 ']' 00:07:36.312 19:16:32 -- common/autotest_common.sh@940 -- # kill -0 1093979 00:07:36.312 19:16:32 -- common/autotest_common.sh@941 -- # uname 00:07:36.313 19:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:36.313 19:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1093979 00:07:36.313 19:16:32 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:36.313 19:16:32 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:36.313 19:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1093979' 00:07:36.313 killing process with pid 1093979 00:07:36.313 19:16:32 -- common/autotest_common.sh@955 -- # kill 1093979 00:07:36.313 19:16:32 -- common/autotest_common.sh@960 -- # wait 1093979 00:07:36.313 nvmf threads initialize successfully 00:07:36.313 bdev subsystem init successfully 00:07:36.313 created a nvmf target service 00:07:36.313 create targets's poll groups done 00:07:36.313 all subsystems of target started 00:07:36.313 nvmf target is running 00:07:36.313 all subsystems of target stopped 00:07:36.313 destroy targets's poll groups done 00:07:36.313 destroyed the nvmf target service 00:07:36.313 bdev subsystem finish successfully 00:07:36.313 nvmf threads destroy successfully 00:07:36.313 19:16:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:36.313 19:16:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:36.313 19:16:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:36.313 19:16:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.313 19:16:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:36.313 19:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.313 19:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.313 19:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.883 19:16:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:36.883 19:16:34 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:36.883 19:16:34 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.883 19:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:36.883 00:07:36.883 real 0m15.947s 00:07:36.883 user 0m45.406s 00:07:36.883 sys 0m3.190s 00:07:36.883 19:16:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.883 19:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:36.883 ************************************ 00:07:36.883 END TEST nvmf_example 00:07:36.883 ************************************ 00:07:36.883 19:16:34 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:36.883 19:16:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:36.883 19:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.883 19:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:36.883 ************************************ 00:07:36.883 START TEST nvmf_filesystem 00:07:36.883 ************************************ 00:07:36.883 19:16:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:36.883 * Looking for test storage... 00:07:36.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.883 19:16:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.883 19:16:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.883 19:16:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.883 19:16:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.883 19:16:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.883 19:16:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.883 19:16:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.883 19:16:35 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.883 19:16:35 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.883 19:16:35 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.883 19:16:35 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.883 19:16:35 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.883 19:16:35 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.883 19:16:35 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.883 19:16:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.883 19:16:35 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.883 19:16:35 -- scripts/common.sh@344 -- # : 1 00:07:36.883 19:16:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.883 19:16:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.883 19:16:35 -- scripts/common.sh@364 -- # decimal 1 00:07:36.883 19:16:35 -- scripts/common.sh@352 -- # local d=1 00:07:36.883 19:16:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.883 19:16:35 -- scripts/common.sh@354 -- # echo 1 00:07:36.883 19:16:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.883 19:16:35 -- scripts/common.sh@365 -- # decimal 2 00:07:36.883 19:16:35 -- scripts/common.sh@352 -- # local d=2 00:07:36.883 19:16:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.883 19:16:35 -- scripts/common.sh@354 -- # echo 2 00:07:36.883 19:16:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.883 19:16:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.883 19:16:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.883 19:16:35 -- scripts/common.sh@367 -- # return 0 00:07:36.883 19:16:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.883 19:16:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.883 --rc genhtml_branch_coverage=1 00:07:36.883 --rc genhtml_function_coverage=1 00:07:36.883 --rc genhtml_legend=1 00:07:36.883 --rc geninfo_all_blocks=1 00:07:36.883 --rc geninfo_unexecuted_blocks=1 00:07:36.883 00:07:36.883 ' 00:07:36.883 19:16:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.883 --rc genhtml_branch_coverage=1 00:07:36.883 --rc genhtml_function_coverage=1 00:07:36.883 --rc genhtml_legend=1 00:07:36.883 --rc geninfo_all_blocks=1 00:07:36.883 --rc geninfo_unexecuted_blocks=1 00:07:36.883 00:07:36.883 ' 00:07:36.883 19:16:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.883 --rc genhtml_branch_coverage=1 00:07:36.883 --rc genhtml_function_coverage=1 00:07:36.883 --rc genhtml_legend=1 00:07:36.883 --rc geninfo_all_blocks=1 00:07:36.883 --rc geninfo_unexecuted_blocks=1 00:07:36.883 00:07:36.883 ' 00:07:36.883 19:16:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.883 --rc genhtml_branch_coverage=1 00:07:36.883 --rc genhtml_function_coverage=1 00:07:36.883 --rc genhtml_legend=1 00:07:36.883 --rc geninfo_all_blocks=1 00:07:36.883 --rc geninfo_unexecuted_blocks=1 00:07:36.883 00:07:36.883 ' 00:07:36.883 19:16:35 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:36.883 19:16:35 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:36.883 19:16:35 -- common/autotest_common.sh@34 -- # set -e 00:07:36.883 19:16:35 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:36.883 19:16:35 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:36.883 19:16:35 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:36.883 19:16:35 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:36.883 19:16:35 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.883 19:16:35 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.883 19:16:35 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.883 19:16:35 -- common/build_config.sh@4 -- # 
CONFIG_HAVE_EXECINFO_H=y 00:07:36.883 19:16:35 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:36.883 19:16:35 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.883 19:16:35 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.883 19:16:35 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.883 19:16:35 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.883 19:16:35 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.883 19:16:35 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.883 19:16:35 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.883 19:16:35 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.883 19:16:35 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.883 19:16:35 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.883 19:16:35 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.883 19:16:35 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:36.883 19:16:35 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.883 19:16:35 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:36.884 19:16:35 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:36.884 19:16:35 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.884 19:16:35 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:36.884 19:16:35 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.884 19:16:35 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:36.884 19:16:35 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.884 19:16:35 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.884 19:16:35 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.884 19:16:35 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:36.884 19:16:35 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.884 19:16:35 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:36.884 19:16:35 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:36.884 19:16:35 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:36.884 19:16:35 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:36.884 19:16:35 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:36.884 19:16:35 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:36.884 19:16:35 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:36.884 19:16:35 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:36.884 19:16:35 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:36.884 19:16:35 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:36.884 19:16:35 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:36.884 19:16:35 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:36.884 19:16:35 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:36.884 19:16:35 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:36.884 19:16:35 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.884 19:16:35 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:36.884 19:16:35 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:36.884 19:16:35 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:36.884 19:16:35 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.884 19:16:35 -- common/build_config.sh@49 -- # 
CONFIG_URING_PATH= 00:07:36.884 19:16:35 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:36.884 19:16:35 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:36.884 19:16:35 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:36.884 19:16:35 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:36.884 19:16:35 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:36.884 19:16:35 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.884 19:16:35 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:36.884 19:16:35 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.884 19:16:35 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:36.884 19:16:35 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:36.884 19:16:35 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:36.884 19:16:35 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:36.884 19:16:35 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:36.884 19:16:35 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:36.884 19:16:35 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:36.884 19:16:35 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:36.884 19:16:35 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:36.884 19:16:35 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:36.884 19:16:35 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:36.884 19:16:35 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:36.884 19:16:35 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:36.884 19:16:35 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:36.884 19:16:35 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:36.884 19:16:35 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.884 19:16:35 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:36.884 19:16:35 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:36.884 19:16:35 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:36.884 19:16:35 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.884 19:16:35 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:36.884 19:16:35 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:36.884 19:16:35 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:36.884 19:16:35 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:36.884 19:16:35 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:36.884 19:16:35 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:36.884 19:16:35 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:36.884 19:16:35 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:36.884 19:16:35 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.884 19:16:35 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:36.884 19:16:35 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:36.884 19:16:35 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 
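applications.sh, sourced above, derives every application path from the location of the script itself rather than from hard-coded directories: it resolves its own directory, walks up to the repository root, and builds arrays such as NVMF_APP and ISCSI_APP from there. A small sketch of that idiom, assuming a hypothetical helper that sits two levels below the repo root (the exact depth is an assumption, not the script's real layout):

  # Resolve the repo root relative to this file, then point at built binaries.
  _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")/../..")   # depth is an assumption
  _app_dir=$_root/build/bin
  _examples_dir=$_root/build/examples
  NVMF_APP=("$_app_dir/nvmf_tgt")
  NVMF_EXAMPLE=("$_examples_dir/nvmf")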
00:07:36.884 19:16:35 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:36.884 19:16:35 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:36.884 19:16:35 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:36.884 19:16:35 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:36.884 19:16:35 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:36.884 19:16:35 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:36.884 #define SPDK_CONFIG_H 00:07:36.884 #define SPDK_CONFIG_APPS 1 00:07:36.884 #define SPDK_CONFIG_ARCH native 00:07:36.884 #undef SPDK_CONFIG_ASAN 00:07:36.884 #undef SPDK_CONFIG_AVAHI 00:07:36.884 #undef SPDK_CONFIG_CET 00:07:36.884 #define SPDK_CONFIG_COVERAGE 1 00:07:36.884 #define SPDK_CONFIG_CROSS_PREFIX 00:07:36.884 #undef SPDK_CONFIG_CRYPTO 00:07:36.884 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:36.884 #undef SPDK_CONFIG_CUSTOMOCF 00:07:36.884 #undef SPDK_CONFIG_DAOS 00:07:36.884 #define SPDK_CONFIG_DAOS_DIR 00:07:36.884 #define SPDK_CONFIG_DEBUG 1 00:07:36.884 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:36.884 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:36.884 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:36.884 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:36.884 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:36.884 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:36.884 #define SPDK_CONFIG_EXAMPLES 1 00:07:36.884 #undef SPDK_CONFIG_FC 00:07:36.884 #define SPDK_CONFIG_FC_PATH 00:07:36.884 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:36.884 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:36.884 #undef SPDK_CONFIG_FUSE 00:07:36.884 #undef SPDK_CONFIG_FUZZER 00:07:36.884 #define SPDK_CONFIG_FUZZER_LIB 00:07:36.884 #undef SPDK_CONFIG_GOLANG 00:07:36.884 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:36.884 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:36.884 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:36.884 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:36.884 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:36.884 #define SPDK_CONFIG_IDXD 1 00:07:36.884 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:36.884 #undef SPDK_CONFIG_IPSEC_MB 00:07:36.884 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:36.884 #define SPDK_CONFIG_ISAL 1 00:07:36.884 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:36.884 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:36.884 #define SPDK_CONFIG_LIBDIR 00:07:36.884 #undef SPDK_CONFIG_LTO 00:07:36.884 #define SPDK_CONFIG_MAX_LCORES 00:07:36.884 #define SPDK_CONFIG_NVME_CUSE 1 00:07:36.884 #undef SPDK_CONFIG_OCF 00:07:36.884 #define SPDK_CONFIG_OCF_PATH 00:07:36.884 #define SPDK_CONFIG_OPENSSL_PATH 00:07:36.884 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:36.884 #undef SPDK_CONFIG_PGO_USE 00:07:36.884 #define SPDK_CONFIG_PREFIX /usr/local 00:07:36.884 #undef SPDK_CONFIG_RAID5F 00:07:36.884 #undef SPDK_CONFIG_RBD 00:07:36.884 #define SPDK_CONFIG_RDMA 1 00:07:36.884 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:36.884 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:36.884 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:36.884 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:36.884 #define SPDK_CONFIG_SHARED 1 00:07:36.884 #undef SPDK_CONFIG_SMA 00:07:36.884 #define SPDK_CONFIG_TESTS 1 00:07:36.884 #undef SPDK_CONFIG_TSAN 00:07:36.884 #define SPDK_CONFIG_UBLK 1 00:07:36.884 
#define SPDK_CONFIG_UBSAN 1 00:07:36.884 #undef SPDK_CONFIG_UNIT_TESTS 00:07:36.884 #undef SPDK_CONFIG_URING 00:07:36.884 #define SPDK_CONFIG_URING_PATH 00:07:36.884 #undef SPDK_CONFIG_URING_ZNS 00:07:36.884 #undef SPDK_CONFIG_USDT 00:07:36.884 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:36.884 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:36.884 #define SPDK_CONFIG_VFIO_USER 1 00:07:36.884 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:36.884 #define SPDK_CONFIG_VHOST 1 00:07:36.884 #define SPDK_CONFIG_VIRTIO 1 00:07:36.884 #undef SPDK_CONFIG_VTUNE 00:07:36.884 #define SPDK_CONFIG_VTUNE_DIR 00:07:36.884 #define SPDK_CONFIG_WERROR 1 00:07:36.884 #define SPDK_CONFIG_WPDK_DIR 00:07:36.884 #undef SPDK_CONFIG_XNVME 00:07:36.884 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:36.884 19:16:35 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:36.884 19:16:35 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.884 19:16:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.884 19:16:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.884 19:16:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.885 19:16:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.885 19:16:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.885 19:16:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.885 19:16:35 -- paths/export.sh@5 -- # export PATH 00:07:36.885 19:16:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.885 19:16:35 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.885 19:16:35 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.885 19:16:35 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:36.885 19:16:35 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:36.885 19:16:35 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:36.885 19:16:35 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:36.885 19:16:35 -- pm/common@16 -- # TEST_TAG=N/A 00:07:36.885 19:16:35 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:36.885 19:16:35 -- common/autotest_common.sh@52 -- # : 1 00:07:36.885 19:16:35 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:36.885 19:16:35 -- common/autotest_common.sh@56 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:36.885 19:16:35 -- common/autotest_common.sh@58 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:36.885 19:16:35 -- common/autotest_common.sh@60 -- # : 1 00:07:36.885 19:16:35 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:36.885 19:16:35 -- common/autotest_common.sh@62 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:36.885 19:16:35 -- common/autotest_common.sh@64 -- # : 00:07:36.885 19:16:35 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:36.885 19:16:35 -- common/autotest_common.sh@66 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:36.885 19:16:35 -- common/autotest_common.sh@68 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:36.885 19:16:35 -- common/autotest_common.sh@70 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:36.885 19:16:35 -- common/autotest_common.sh@72 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:36.885 19:16:35 -- common/autotest_common.sh@74 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:36.885 19:16:35 -- common/autotest_common.sh@76 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:36.885 19:16:35 -- common/autotest_common.sh@78 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:36.885 19:16:35 -- common/autotest_common.sh@80 -- # : 1 00:07:36.885 19:16:35 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:36.885 19:16:35 -- 
common/autotest_common.sh@82 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:36.885 19:16:35 -- common/autotest_common.sh@84 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:36.885 19:16:35 -- common/autotest_common.sh@86 -- # : 1 00:07:36.885 19:16:35 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:36.885 19:16:35 -- common/autotest_common.sh@88 -- # : 1 00:07:36.885 19:16:35 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:36.885 19:16:35 -- common/autotest_common.sh@90 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:36.885 19:16:35 -- common/autotest_common.sh@92 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:36.885 19:16:35 -- common/autotest_common.sh@94 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:36.885 19:16:35 -- common/autotest_common.sh@96 -- # : tcp 00:07:36.885 19:16:35 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:36.885 19:16:35 -- common/autotest_common.sh@98 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:36.885 19:16:35 -- common/autotest_common.sh@100 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:36.885 19:16:35 -- common/autotest_common.sh@102 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:36.885 19:16:35 -- common/autotest_common.sh@104 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:36.885 19:16:35 -- common/autotest_common.sh@106 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:36.885 19:16:35 -- common/autotest_common.sh@108 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:36.885 19:16:35 -- common/autotest_common.sh@110 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:36.885 19:16:35 -- common/autotest_common.sh@112 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:36.885 19:16:35 -- common/autotest_common.sh@114 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:36.885 19:16:35 -- common/autotest_common.sh@116 -- # : 1 00:07:36.885 19:16:35 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:36.885 19:16:35 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:36.885 19:16:35 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:36.885 19:16:35 -- common/autotest_common.sh@120 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:36.885 19:16:35 -- common/autotest_common.sh@122 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:36.885 19:16:35 -- common/autotest_common.sh@124 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:36.885 19:16:35 -- common/autotest_common.sh@126 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:36.885 19:16:35 -- common/autotest_common.sh@128 -- # : 0 00:07:36.885 19:16:35 -- 
common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:36.885 19:16:35 -- common/autotest_common.sh@130 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:36.885 19:16:35 -- common/autotest_common.sh@132 -- # : v22.11.4 00:07:36.885 19:16:35 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:36.885 19:16:35 -- common/autotest_common.sh@134 -- # : true 00:07:36.885 19:16:35 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:36.885 19:16:35 -- common/autotest_common.sh@136 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:36.885 19:16:35 -- common/autotest_common.sh@138 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:36.885 19:16:35 -- common/autotest_common.sh@140 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:36.885 19:16:35 -- common/autotest_common.sh@142 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:36.885 19:16:35 -- common/autotest_common.sh@144 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:36.885 19:16:35 -- common/autotest_common.sh@146 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:36.885 19:16:35 -- common/autotest_common.sh@148 -- # : e810 00:07:36.885 19:16:35 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:36.885 19:16:35 -- common/autotest_common.sh@150 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:36.885 19:16:35 -- common/autotest_common.sh@152 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:36.885 19:16:35 -- common/autotest_common.sh@154 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:36.885 19:16:35 -- common/autotest_common.sh@156 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:36.885 19:16:35 -- common/autotest_common.sh@158 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:36.885 19:16:35 -- common/autotest_common.sh@160 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:36.885 19:16:35 -- common/autotest_common.sh@163 -- # : 00:07:36.885 19:16:35 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:36.885 19:16:35 -- common/autotest_common.sh@165 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:36.885 19:16:35 -- common/autotest_common.sh@167 -- # : 0 00:07:36.885 19:16:35 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:36.885 19:16:35 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:36.885 19:16:35 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@173 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.886 19:16:35 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.886 19:16:35 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.886 19:16:35 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:36.886 19:16:35 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:36.886 19:16:35 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:36.886 19:16:35 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:36.886 19:16:35 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.886 19:16:35 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.886 19:16:35 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.886 19:16:35 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.886 19:16:35 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:36.886 19:16:35 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:36.886 19:16:35 -- common/autotest_common.sh@196 -- # cat 00:07:36.886 19:16:35 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:36.886 19:16:35 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.886 19:16:35 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.886 19:16:35 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.886 19:16:35 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.886 19:16:35 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:36.886 19:16:35 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:36.886 19:16:35 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:36.886 19:16:35 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:36.886 19:16:35 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:36.886 19:16:35 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:36.886 19:16:35 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.886 19:16:35 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.886 19:16:35 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.886 19:16:35 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.886 19:16:35 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.886 19:16:35 -- common/autotest_common.sh@242 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.886 19:16:35 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.886 19:16:35 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.886 19:16:35 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:36.886 19:16:35 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:36.886 19:16:35 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:36.886 19:16:35 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:36.886 19:16:35 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:36.886 19:16:35 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:36.886 19:16:35 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:36.886 19:16:35 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:36.886 19:16:35 -- common/autotest_common.sh@259 -- # valgrind= 00:07:36.886 19:16:35 -- common/autotest_common.sh@265 -- # uname -s 00:07:36.886 19:16:35 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:36.886 19:16:35 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:36.886 19:16:35 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:36.886 19:16:35 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:36.886 19:16:35 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:36.886 19:16:35 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j48 00:07:36.886 19:16:35 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:36.886 19:16:35 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:36.886 19:16:35 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:36.886 19:16:35 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:36.886 19:16:35 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:36.886 19:16:35 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:36.886 19:16:35 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:36.886 19:16:35 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:36.886 19:16:35 -- common/autotest_common.sh@319 -- # [[ -z 1095729 ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@319 -- # kill -0 1095729 00:07:36.886 19:16:35 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:36.886 19:16:35 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:36.886 19:16:35 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:36.886 19:16:35 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:36.886 19:16:35 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:36.886 19:16:35 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:36.886 19:16:35 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:36.886 19:16:35 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.VVxCyb 00:07:36.886 19:16:35 -- common/autotest_common.sh@344 -- # 
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:36.886 19:16:35 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:36.886 19:16:35 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VVxCyb/tests/target /tmp/spdk.VVxCyb 00:07:36.886 19:16:35 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:36.886 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.886 19:16:35 -- common/autotest_common.sh@328 -- # df -T 00:07:36.886 19:16:35 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:36.886 19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:07:36.886 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:07:36.886 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:36.886 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:07:36.886 19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:07:36.886 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:07:36.886 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:07:36.886 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:07:36.886 19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=54008709120 00:07:36.886 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61988528128 00:07:36.886 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=7979819008 00:07:36.886 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.886 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=30993006592 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30994264064 00:07:36.887 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:36.887 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=12388921344 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12397707264 00:07:36.887 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=8785920 00:07:36.887 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:36.887 
19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=30993903616 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30994264064 00:07:36.887 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=360448 00:07:36.887 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # avails["$mount"]=6198837248 00:07:36.887 19:16:35 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6198849536 00:07:36.887 19:16:35 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:36.887 19:16:35 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:36.887 19:16:35 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:36.887 * Looking for test storage... 00:07:36.887 19:16:35 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:36.887 19:16:35 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:36.887 19:16:35 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.887 19:16:35 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:37.146 19:16:35 -- common/autotest_common.sh@373 -- # mount=/ 00:07:37.146 19:16:35 -- common/autotest_common.sh@375 -- # target_space=54008709120 00:07:37.146 19:16:35 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:37.146 19:16:35 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:37.146 19:16:35 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:07:37.146 19:16:35 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:07:37.146 19:16:35 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:07:37.146 19:16:35 -- common/autotest_common.sh@382 -- # new_size=10194411520 00:07:37.146 19:16:35 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:37.146 19:16:35 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.146 19:16:35 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.146 19:16:35 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.146 19:16:35 -- common/autotest_common.sh@390 -- # return 0 00:07:37.146 19:16:35 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:37.146 19:16:35 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:37.146 19:16:35 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:37.146 19:16:35 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:37.146 19:16:35 -- common/autotest_common.sh@1682 -- # true 00:07:37.146 19:16:35 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:37.146 19:16:35 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:37.146 19:16:35 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:37.146 19:16:35 -- 
common/autotest_common.sh@27 -- # exec 00:07:37.146 19:16:35 -- common/autotest_common.sh@29 -- # exec 00:07:37.146 19:16:35 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:37.146 19:16:35 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:37.146 19:16:35 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:37.146 19:16:35 -- common/autotest_common.sh@18 -- # set -x 00:07:37.146 19:16:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:37.146 19:16:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:37.146 19:16:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:37.146 19:16:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:37.146 19:16:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:37.146 19:16:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:37.146 19:16:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:37.146 19:16:35 -- scripts/common.sh@335 -- # IFS=.-: 00:07:37.146 19:16:35 -- scripts/common.sh@335 -- # read -ra ver1 00:07:37.146 19:16:35 -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.146 19:16:35 -- scripts/common.sh@336 -- # read -ra ver2 00:07:37.146 19:16:35 -- scripts/common.sh@337 -- # local 'op=<' 00:07:37.146 19:16:35 -- scripts/common.sh@339 -- # ver1_l=2 00:07:37.146 19:16:35 -- scripts/common.sh@340 -- # ver2_l=1 00:07:37.146 19:16:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:37.146 19:16:35 -- scripts/common.sh@343 -- # case "$op" in 00:07:37.146 19:16:35 -- scripts/common.sh@344 -- # : 1 00:07:37.146 19:16:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:37.146 19:16:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.146 19:16:35 -- scripts/common.sh@364 -- # decimal 1 00:07:37.146 19:16:35 -- scripts/common.sh@352 -- # local d=1 00:07:37.146 19:16:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.146 19:16:35 -- scripts/common.sh@354 -- # echo 1 00:07:37.146 19:16:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:37.146 19:16:35 -- scripts/common.sh@365 -- # decimal 2 00:07:37.147 19:16:35 -- scripts/common.sh@352 -- # local d=2 00:07:37.147 19:16:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.147 19:16:35 -- scripts/common.sh@354 -- # echo 2 00:07:37.147 19:16:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:37.147 19:16:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:37.147 19:16:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:37.147 19:16:35 -- scripts/common.sh@367 -- # return 0 00:07:37.147 19:16:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.147 19:16:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 19:16:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 19:16:35 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 19:16:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:37.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.147 --rc genhtml_branch_coverage=1 00:07:37.147 --rc genhtml_function_coverage=1 00:07:37.147 --rc genhtml_legend=1 00:07:37.147 --rc geninfo_all_blocks=1 00:07:37.147 --rc geninfo_unexecuted_blocks=1 00:07:37.147 00:07:37.147 ' 00:07:37.147 19:16:35 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.147 19:16:35 -- nvmf/common.sh@7 -- # uname -s 00:07:37.147 19:16:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.147 19:16:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.147 19:16:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.147 19:16:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.147 19:16:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.147 19:16:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.147 19:16:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.147 19:16:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.147 19:16:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.147 19:16:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.147 19:16:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.147 19:16:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.147 19:16:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.147 19:16:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.147 19:16:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.147 19:16:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.147 19:16:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.147 19:16:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.147 19:16:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.147 19:16:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 19:16:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 19:16:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 19:16:35 -- paths/export.sh@5 -- # export PATH 00:07:37.147 19:16:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.147 19:16:35 -- nvmf/common.sh@46 -- # : 0 00:07:37.147 19:16:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:37.147 19:16:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:37.147 19:16:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:37.147 19:16:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.147 19:16:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.147 19:16:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:37.147 19:16:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:37.147 19:16:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:37.147 19:16:35 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:37.147 19:16:35 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:37.147 19:16:35 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:37.147 19:16:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:37.147 19:16:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.147 19:16:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:37.147 19:16:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:37.147 19:16:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:37.147 19:16:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.147 19:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.147 19:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.147 19:16:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:37.147 19:16:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:37.147 
19:16:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:37.147 19:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:39.050 19:16:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:39.050 19:16:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:39.050 19:16:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:39.050 19:16:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:39.050 19:16:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:39.050 19:16:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:39.050 19:16:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:39.050 19:16:37 -- nvmf/common.sh@294 -- # net_devs=() 00:07:39.050 19:16:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:39.050 19:16:37 -- nvmf/common.sh@295 -- # e810=() 00:07:39.050 19:16:37 -- nvmf/common.sh@295 -- # local -ga e810 00:07:39.050 19:16:37 -- nvmf/common.sh@296 -- # x722=() 00:07:39.050 19:16:37 -- nvmf/common.sh@296 -- # local -ga x722 00:07:39.050 19:16:37 -- nvmf/common.sh@297 -- # mlx=() 00:07:39.050 19:16:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:39.050 19:16:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.050 19:16:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:39.050 19:16:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:39.050 19:16:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:39.050 19:16:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:39.050 19:16:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:39.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:39.050 19:16:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:39.050 19:16:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:39.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:39.050 19:16:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
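Note on the device discovery traced above: gather_supported_nvmf_pci_devs matches NIC PCI IDs (here Intel E810 ports, 8086:0x159b bound to the ice driver) and then reads each port's kernel net device out of sysfs. A minimal standalone sketch of the same idea, assuming lspci is available and reusing the 8086:159b ID from this run (the test script itself walks its own pci_bus_cache rather than calling lspci):

    #!/usr/bin/env bash
    set -euo pipefail
    # Find every PCI function with the E810 vendor:device ID and list its net devices.
    for pci in $(lspci -Dnn | awk '/8086:159b/ {print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] || continue
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done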
00:07:39.050 19:16:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:39.050 19:16:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:39.050 19:16:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.050 19:16:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:39.050 19:16:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.050 19:16:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:39.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:39.050 19:16:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.050 19:16:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:39.050 19:16:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.050 19:16:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:39.050 19:16:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.050 19:16:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:39.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:39.050 19:16:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.050 19:16:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:39.050 19:16:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:39.050 19:16:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:39.050 19:16:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:39.050 19:16:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.050 19:16:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.050 19:16:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.050 19:16:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:39.050 19:16:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.050 19:16:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.050 19:16:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:39.050 19:16:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.050 19:16:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.050 19:16:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:39.050 19:16:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:39.050 19:16:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.050 19:16:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.309 19:16:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.309 19:16:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.309 19:16:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:39.309 19:16:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.309 19:16:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.309 19:16:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.309 19:16:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:39.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:39.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:39.309 00:07:39.309 --- 10.0.0.2 ping statistics --- 00:07:39.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.309 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:39.309 19:16:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:39.309 00:07:39.309 --- 10.0.0.1 ping statistics --- 00:07:39.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.309 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:39.309 19:16:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.309 19:16:37 -- nvmf/common.sh@410 -- # return 0 00:07:39.309 19:16:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:39.309 19:16:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.309 19:16:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:39.309 19:16:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:39.309 19:16:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.309 19:16:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:39.309 19:16:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:39.309 19:16:37 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:39.309 19:16:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.309 19:16:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.309 19:16:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.309 ************************************ 00:07:39.309 START TEST nvmf_filesystem_no_in_capsule 00:07:39.309 ************************************ 00:07:39.309 19:16:37 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:39.309 19:16:37 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:39.309 19:16:37 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:39.309 19:16:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:39.309 19:16:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.309 19:16:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.309 19:16:37 -- nvmf/common.sh@469 -- # nvmfpid=1097424 00:07:39.309 19:16:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.309 19:16:37 -- nvmf/common.sh@470 -- # waitforlisten 1097424 00:07:39.309 19:16:37 -- common/autotest_common.sh@829 -- # '[' -z 1097424 ']' 00:07:39.309 19:16:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.309 19:16:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.309 19:16:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.309 19:16:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.309 19:16:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.309 [2024-11-17 19:16:37.489060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
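The namespace plumbing and target launch traced just above condense to the following sequence (a sketch assembled from the commands in this trace; it assumes two physical ports named cvl_0_0/cvl_0_1 as in this run, root privileges, and that it is run from the SPDK build tree):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
    # nvmf_tgt then runs inside the namespace so its listener binds to cvl_0_0:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &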
00:07:39.309 [2024-11-17 19:16:37.489146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.309 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.309 [2024-11-17 19:16:37.555068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.568 [2024-11-17 19:16:37.649451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.568 [2024-11-17 19:16:37.649616] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.568 [2024-11-17 19:16:37.649633] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.568 [2024-11-17 19:16:37.649646] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.568 [2024-11-17 19:16:37.649726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.568 [2024-11-17 19:16:37.649783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.568 [2024-11-17 19:16:37.650499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.568 [2024-11-17 19:16:37.650503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.502 19:16:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.502 19:16:38 -- common/autotest_common.sh@862 -- # return 0 00:07:40.502 19:16:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:40.502 19:16:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 19:16:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.502 19:16:38 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:40.502 19:16:38 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:40.502 19:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 [2024-11-17 19:16:38.511504] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.502 19:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.502 19:16:38 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:40.502 19:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 Malloc1 00:07:40.502 19:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.502 19:16:38 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.502 19:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 19:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.502 19:16:38 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.502 19:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 19:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.502 19:16:38 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
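The rpc_cmd calls above provision the target side; an equivalent standalone sequence is sketched below, where $RPC stands for SPDK's scripts/rpc.py client talking to the default /var/tmp/spdk.sock, which is what rpc_cmd wraps in this test (the path is relative to the SPDK tree):

    RPC="./scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                 # TCP transport, no in-capsule data
    $RPC bdev_malloc_create 512 512 -b Malloc1                        # 512 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420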
00:07:40.502 19:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 [2024-11-17 19:16:38.697058] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.502 19:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.502 19:16:38 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.502 19:16:38 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:40.502 19:16:38 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:40.502 19:16:38 -- common/autotest_common.sh@1369 -- # local bs 00:07:40.502 19:16:38 -- common/autotest_common.sh@1370 -- # local nb 00:07:40.502 19:16:38 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.502 19:16:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.502 19:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.502 19:16:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.502 19:16:38 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:40.502 { 00:07:40.502 "name": "Malloc1", 00:07:40.502 "aliases": [ 00:07:40.502 "c58e256b-eb5f-4531-8e25-e3f07149f28a" 00:07:40.502 ], 00:07:40.502 "product_name": "Malloc disk", 00:07:40.502 "block_size": 512, 00:07:40.502 "num_blocks": 1048576, 00:07:40.502 "uuid": "c58e256b-eb5f-4531-8e25-e3f07149f28a", 00:07:40.502 "assigned_rate_limits": { 00:07:40.502 "rw_ios_per_sec": 0, 00:07:40.502 "rw_mbytes_per_sec": 0, 00:07:40.502 "r_mbytes_per_sec": 0, 00:07:40.502 "w_mbytes_per_sec": 0 00:07:40.502 }, 00:07:40.502 "claimed": true, 00:07:40.502 "claim_type": "exclusive_write", 00:07:40.502 "zoned": false, 00:07:40.502 "supported_io_types": { 00:07:40.502 "read": true, 00:07:40.502 "write": true, 00:07:40.502 "unmap": true, 00:07:40.502 "write_zeroes": true, 00:07:40.502 "flush": true, 00:07:40.502 "reset": true, 00:07:40.502 "compare": false, 00:07:40.502 "compare_and_write": false, 00:07:40.502 "abort": true, 00:07:40.502 "nvme_admin": false, 00:07:40.502 "nvme_io": false 00:07:40.502 }, 00:07:40.502 "memory_domains": [ 00:07:40.502 { 00:07:40.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.502 "dma_device_type": 2 00:07:40.502 } 00:07:40.502 ], 00:07:40.502 "driver_specific": {} 00:07:40.502 } 00:07:40.502 ]' 00:07:40.502 19:16:38 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:40.502 19:16:38 -- common/autotest_common.sh@1372 -- # bs=512 00:07:40.502 19:16:38 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:40.760 19:16:38 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:40.760 19:16:38 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:40.760 19:16:38 -- common/autotest_common.sh@1377 -- # echo 512 00:07:40.760 19:16:38 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.760 19:16:38 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.324 19:16:39 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.324 19:16:39 -- common/autotest_common.sh@1187 -- # local i=0 00:07:41.324 19:16:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.324 19:16:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:41.324 19:16:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:43.220 19:16:41 
-- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:43.220 19:16:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:43.220 19:16:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.220 19:16:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:43.220 19:16:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.220 19:16:41 -- common/autotest_common.sh@1197 -- # return 0 00:07:43.220 19:16:41 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:43.220 19:16:41 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:43.220 19:16:41 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:43.220 19:16:41 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:43.220 19:16:41 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:43.220 19:16:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:43.220 19:16:41 -- setup/common.sh@80 -- # echo 536870912 00:07:43.220 19:16:41 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:43.220 19:16:41 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:43.220 19:16:41 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:43.220 19:16:41 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:43.786 19:16:41 -- target/filesystem.sh@69 -- # partprobe 00:07:44.718 19:16:42 -- target/filesystem.sh@70 -- # sleep 1 00:07:45.652 19:16:43 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:45.652 19:16:43 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:45.652 19:16:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.652 19:16:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.652 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.652 ************************************ 00:07:45.652 START TEST filesystem_ext4 00:07:45.652 ************************************ 00:07:45.652 19:16:43 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:45.652 19:16:43 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:45.652 19:16:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.652 19:16:43 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:45.652 19:16:43 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:45.652 19:16:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:45.652 19:16:43 -- common/autotest_common.sh@914 -- # local i=0 00:07:45.652 19:16:43 -- common/autotest_common.sh@915 -- # local force 00:07:45.652 19:16:43 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:45.652 19:16:43 -- common/autotest_common.sh@918 -- # force=-F 00:07:45.652 19:16:43 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:45.652 mke2fs 1.47.0 (5-Feb-2023) 00:07:45.652 Discarding device blocks: 0/522240 done 00:07:45.652 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:45.652 Filesystem UUID: d05c9d17-9963-45a1-8ef3-6400e37ea2a4 00:07:45.652 Superblock backups stored on blocks: 00:07:45.652 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:45.652 00:07:45.652 Allocating group tables: 0/64 done 00:07:45.652 Writing inode tables: 0/64 done 00:07:48.930 Creating journal (8192 blocks): done 00:07:50.795 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:50.795 00:07:50.795 19:16:48 -- 
common/autotest_common.sh@931 -- # return 0 00:07:50.795 19:16:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.356 19:16:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.356 19:16:54 -- target/filesystem.sh@25 -- # sync 00:07:57.356 19:16:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.356 19:16:54 -- target/filesystem.sh@27 -- # sync 00:07:57.356 19:16:54 -- target/filesystem.sh@29 -- # i=0 00:07:57.356 19:16:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.356 19:16:54 -- target/filesystem.sh@37 -- # kill -0 1097424 00:07:57.356 19:16:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.356 19:16:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.356 19:16:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.356 19:16:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.356 00:07:57.356 real 0m11.169s 00:07:57.356 user 0m0.020s 00:07:57.356 sys 0m0.076s 00:07:57.356 19:16:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.356 19:16:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.356 ************************************ 00:07:57.356 END TEST filesystem_ext4 00:07:57.356 ************************************ 00:07:57.356 19:16:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:57.356 19:16:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:57.356 19:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.356 19:16:54 -- common/autotest_common.sh@10 -- # set +x 00:07:57.356 ************************************ 00:07:57.356 START TEST filesystem_btrfs 00:07:57.356 ************************************ 00:07:57.356 19:16:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.356 19:16:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.356 19:16:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.356 19:16:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.356 19:16:54 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:57.356 19:16:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:57.356 19:16:54 -- common/autotest_common.sh@914 -- # local i=0 00:07:57.356 19:16:54 -- common/autotest_common.sh@915 -- # local force 00:07:57.356 19:16:54 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:57.356 19:16:54 -- common/autotest_common.sh@920 -- # force=-f 00:07:57.356 19:16:54 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:57.356 btrfs-progs v6.8.1 00:07:57.356 See https://btrfs.readthedocs.io for more information. 00:07:57.356 00:07:57.356 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:57.356 NOTE: several default settings have changed in version 5.15, please make sure 00:07:57.356 this does not affect your deployments: 00:07:57.356 - DUP for metadata (-m dup) 00:07:57.356 - enabled no-holes (-O no-holes) 00:07:57.356 - enabled free-space-tree (-R free-space-tree) 00:07:57.356 00:07:57.356 Label: (null) 00:07:57.356 UUID: 7acd8eb6-ff5b-42ee-b617-11ee330ad324 00:07:57.356 Node size: 16384 00:07:57.356 Sector size: 4096 (CPU page size: 4096) 00:07:57.356 Filesystem size: 510.00MiB 00:07:57.356 Block group profiles: 00:07:57.356 Data: single 8.00MiB 00:07:57.356 Metadata: DUP 32.00MiB 00:07:57.356 System: DUP 8.00MiB 00:07:57.356 SSD detected: yes 00:07:57.356 Zoned device: no 00:07:57.356 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:57.356 Checksum: crc32c 00:07:57.356 Number of devices: 1 00:07:57.356 Devices: 00:07:57.356 ID SIZE PATH 00:07:57.356 1 510.00MiB /dev/nvme0n1p1 00:07:57.356 00:07:57.356 19:16:55 -- common/autotest_common.sh@931 -- # return 0 00:07:57.356 19:16:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.924 19:16:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.924 19:16:55 -- target/filesystem.sh@25 -- # sync 00:07:57.924 19:16:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.924 19:16:55 -- target/filesystem.sh@27 -- # sync 00:07:57.924 19:16:55 -- target/filesystem.sh@29 -- # i=0 00:07:57.924 19:16:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.924 19:16:55 -- target/filesystem.sh@37 -- # kill -0 1097424 00:07:57.924 19:16:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.924 19:16:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.924 19:16:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.924 19:16:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.924 00:07:57.924 real 0m1.054s 00:07:57.924 user 0m0.019s 00:07:57.924 sys 0m0.099s 00:07:57.924 19:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.924 19:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.924 ************************************ 00:07:57.924 END TEST filesystem_btrfs 00:07:57.924 ************************************ 00:07:57.924 19:16:55 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:57.924 19:16:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:57.924 19:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.924 19:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.924 ************************************ 00:07:57.924 START TEST filesystem_xfs 00:07:57.924 ************************************ 00:07:57.924 19:16:55 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:57.924 19:16:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:57.924 19:16:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.924 19:16:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:57.924 19:16:55 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:57.924 19:16:55 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:57.924 19:16:55 -- common/autotest_common.sh@914 -- # local i=0 00:07:57.924 19:16:55 -- common/autotest_common.sh@915 -- # local force 00:07:57.924 19:16:55 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:57.924 19:16:55 -- common/autotest_common.sh@920 -- # force=-f 00:07:57.924 19:16:55 -- common/autotest_common.sh@923 -- # mkfs.xfs 
-f /dev/nvme0n1p1 00:07:57.924 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:57.924 = sectsz=512 attr=2, projid32bit=1 00:07:57.924 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:57.924 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:57.924 data = bsize=4096 blocks=130560, imaxpct=25 00:07:57.924 = sunit=0 swidth=0 blks 00:07:57.924 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:57.924 log =internal log bsize=4096 blocks=16384, version=2 00:07:57.924 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:57.924 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.859 Discarding blocks...Done. 00:07:58.859 19:16:57 -- common/autotest_common.sh@931 -- # return 0 00:07:58.859 19:16:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.388 19:16:59 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.388 19:16:59 -- target/filesystem.sh@25 -- # sync 00:08:01.388 19:16:59 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.388 19:16:59 -- target/filesystem.sh@27 -- # sync 00:08:01.388 19:16:59 -- target/filesystem.sh@29 -- # i=0 00:08:01.388 19:16:59 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.388 19:16:59 -- target/filesystem.sh@37 -- # kill -0 1097424 00:08:01.388 19:16:59 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.388 19:16:59 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.388 19:16:59 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.388 19:16:59 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.388 00:08:01.388 real 0m3.610s 00:08:01.388 user 0m0.021s 00:08:01.388 sys 0m0.061s 00:08:01.388 19:16:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.388 19:16:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.388 ************************************ 00:08:01.388 END TEST filesystem_xfs 00:08:01.388 ************************************ 00:08:01.388 19:16:59 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:01.647 19:16:59 -- target/filesystem.sh@93 -- # sync 00:08:01.647 19:16:59 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:01.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.647 19:16:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:01.647 19:16:59 -- common/autotest_common.sh@1208 -- # local i=0 00:08:01.647 19:16:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:01.647 19:16:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.905 19:16:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:01.905 19:16:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.905 19:16:59 -- common/autotest_common.sh@1220 -- # return 0 00:08:01.905 19:16:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.905 19:16:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.905 19:16:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.905 19:16:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.905 19:16:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:01.905 19:16:59 -- target/filesystem.sh@101 -- # killprocess 1097424 00:08:01.905 19:16:59 -- common/autotest_common.sh@936 -- # '[' -z 1097424 ']' 00:08:01.905 19:16:59 -- common/autotest_common.sh@940 -- # kill -0 1097424 00:08:01.905 19:16:59 -- common/autotest_common.sh@941 -- # uname 00:08:01.905 
19:16:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.905 19:16:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1097424 00:08:01.905 19:16:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.905 19:16:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.905 19:16:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1097424' 00:08:01.905 killing process with pid 1097424 00:08:01.905 19:16:59 -- common/autotest_common.sh@955 -- # kill 1097424 00:08:01.905 19:16:59 -- common/autotest_common.sh@960 -- # wait 1097424 00:08:02.165 19:17:00 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.165 00:08:02.165 real 0m22.970s 00:08:02.165 user 1m29.134s 00:08:02.165 sys 0m2.729s 00:08:02.165 19:17:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.165 19:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.165 ************************************ 00:08:02.165 END TEST nvmf_filesystem_no_in_capsule 00:08:02.165 ************************************ 00:08:02.424 19:17:00 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:02.424 19:17:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.424 19:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.424 19:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.424 ************************************ 00:08:02.424 START TEST nvmf_filesystem_in_capsule 00:08:02.424 ************************************ 00:08:02.424 19:17:00 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:02.424 19:17:00 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:02.424 19:17:00 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:02.424 19:17:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:02.424 19:17:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.424 19:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.424 19:17:00 -- nvmf/common.sh@469 -- # nvmfpid=1100482 00:08:02.424 19:17:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.424 19:17:00 -- nvmf/common.sh@470 -- # waitforlisten 1100482 00:08:02.424 19:17:00 -- common/autotest_common.sh@829 -- # '[' -z 1100482 ']' 00:08:02.424 19:17:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.424 19:17:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.424 19:17:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.424 19:17:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.424 19:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.424 [2024-11-17 19:17:00.495900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
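For reference, the initiator-side flow that the filesystem_ext4/btrfs/xfs blocks above just ran (and which the in-capsule variant below repeats) condenses to roughly the sketch below; the hostnqn/hostid are the values generated for this run, the force flag differs per mkfs tool, and a simple wait loop stands in for the script's waitforserial helper:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    dev=/dev/$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    mkdir -p /mnt/device
    parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    for fs in ext4 btrfs xfs; do
        force=-f; [ "$fs" = ext4 ] && force=-F        # ext4 uses -F, btrfs/xfs use -f
        "mkfs.$fs" $force "${dev}p1"
        mount "${dev}p1" /mnt/device
        touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
        umount /mnt/device
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # teardown, then delete the subsystem and kill nvmf_tgt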
00:08:02.424 [2024-11-17 19:17:00.495981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.424 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.424 [2024-11-17 19:17:00.564471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.424 [2024-11-17 19:17:00.653194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:02.424 [2024-11-17 19:17:00.653356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.424 [2024-11-17 19:17:00.653376] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.424 [2024-11-17 19:17:00.653390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.424 [2024-11-17 19:17:00.653521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.424 [2024-11-17 19:17:00.653589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.424 [2024-11-17 19:17:00.653613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.424 [2024-11-17 19:17:00.653616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.360 19:17:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.360 19:17:01 -- common/autotest_common.sh@862 -- # return 0 00:08:03.360 19:17:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:03.360 19:17:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.360 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 19:17:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.360 19:17:01 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:03.360 19:17:01 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:03.360 19:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.360 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.360 [2024-11-17 19:17:01.500379] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.360 19:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.360 19:17:01 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:03.360 19:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.360 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 Malloc1 00:08:03.620 19:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.620 19:17:01 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.620 19:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.620 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 19:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.620 19:17:01 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:03.620 19:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.620 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 19:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.620 19:17:01 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
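The only functional difference in this second pass is the transport setup: in_capsule=4096 is passed through to nvmf_create_transport, so writes of up to 4096 bytes can carry their data inside the command capsule instead of being fetched in a separate transfer. With $RPC as in the earlier sketch, the changed call is just:

    # first pass:   nvmf_create_transport -t tcp -o -u 8192 -c 0      (no in-capsule data)
    # second pass:  nvmf_create_transport -t tcp -o -u 8192 -c 4096   (4 KiB in-capsule data)
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096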
00:08:03.620 19:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.620 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 [2024-11-17 19:17:01.691121] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.620 19:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.620 19:17:01 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:03.620 19:17:01 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:03.620 19:17:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:03.620 19:17:01 -- common/autotest_common.sh@1369 -- # local bs 00:08:03.620 19:17:01 -- common/autotest_common.sh@1370 -- # local nb 00:08:03.620 19:17:01 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:03.620 19:17:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.620 19:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 19:17:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.620 19:17:01 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:03.620 { 00:08:03.620 "name": "Malloc1", 00:08:03.620 "aliases": [ 00:08:03.620 "c6240aca-c63d-4796-b664-99be2687c411" 00:08:03.620 ], 00:08:03.620 "product_name": "Malloc disk", 00:08:03.620 "block_size": 512, 00:08:03.620 "num_blocks": 1048576, 00:08:03.620 "uuid": "c6240aca-c63d-4796-b664-99be2687c411", 00:08:03.620 "assigned_rate_limits": { 00:08:03.620 "rw_ios_per_sec": 0, 00:08:03.620 "rw_mbytes_per_sec": 0, 00:08:03.620 "r_mbytes_per_sec": 0, 00:08:03.620 "w_mbytes_per_sec": 0 00:08:03.620 }, 00:08:03.620 "claimed": true, 00:08:03.620 "claim_type": "exclusive_write", 00:08:03.620 "zoned": false, 00:08:03.620 "supported_io_types": { 00:08:03.620 "read": true, 00:08:03.620 "write": true, 00:08:03.620 "unmap": true, 00:08:03.620 "write_zeroes": true, 00:08:03.620 "flush": true, 00:08:03.620 "reset": true, 00:08:03.620 "compare": false, 00:08:03.620 "compare_and_write": false, 00:08:03.620 "abort": true, 00:08:03.620 "nvme_admin": false, 00:08:03.620 "nvme_io": false 00:08:03.620 }, 00:08:03.620 "memory_domains": [ 00:08:03.620 { 00:08:03.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.620 "dma_device_type": 2 00:08:03.620 } 00:08:03.620 ], 00:08:03.620 "driver_specific": {} 00:08:03.620 } 00:08:03.620 ]' 00:08:03.620 19:17:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:03.620 19:17:01 -- common/autotest_common.sh@1372 -- # bs=512 00:08:03.620 19:17:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:03.620 19:17:01 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:03.620 19:17:01 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:03.620 19:17:01 -- common/autotest_common.sh@1377 -- # echo 512 00:08:03.620 19:17:01 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:03.620 19:17:01 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.222 19:17:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.222 19:17:02 -- common/autotest_common.sh@1187 -- # local i=0 00:08:04.222 19:17:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.222 19:17:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:04.222 19:17:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:06.150 19:17:04 
-- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:06.150 19:17:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:06.150 19:17:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.150 19:17:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:06.150 19:17:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.150 19:17:04 -- common/autotest_common.sh@1197 -- # return 0 00:08:06.150 19:17:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:06.150 19:17:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:06.408 19:17:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:06.408 19:17:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:06.408 19:17:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:06.408 19:17:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:06.408 19:17:04 -- setup/common.sh@80 -- # echo 536870912 00:08:06.408 19:17:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:06.408 19:17:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:06.408 19:17:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:06.408 19:17:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:06.667 19:17:04 -- target/filesystem.sh@69 -- # partprobe 00:08:07.233 19:17:05 -- target/filesystem.sh@70 -- # sleep 1 00:08:08.169 19:17:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:08.169 19:17:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:08.169 19:17:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:08.169 19:17:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.169 19:17:06 -- common/autotest_common.sh@10 -- # set +x 00:08:08.169 ************************************ 00:08:08.169 START TEST filesystem_in_capsule_ext4 00:08:08.169 ************************************ 00:08:08.169 19:17:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:08.169 19:17:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:08.169 19:17:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.169 19:17:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:08.169 19:17:06 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:08.169 19:17:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:08.169 19:17:06 -- common/autotest_common.sh@914 -- # local i=0 00:08:08.169 19:17:06 -- common/autotest_common.sh@915 -- # local force 00:08:08.169 19:17:06 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:08.169 19:17:06 -- common/autotest_common.sh@918 -- # force=-F 00:08:08.169 19:17:06 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:08.169 mke2fs 1.47.0 (5-Feb-2023) 00:08:08.169 Discarding device blocks: 0/522240 done 00:08:08.169 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:08.169 Filesystem UUID: 482cf29e-3925-4050-b724-e5833e79e32f 00:08:08.169 Superblock backups stored on blocks: 00:08:08.169 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:08.169 00:08:08.169 Allocating group tables: 0/64 done 00:08:08.169 Writing inode tables: 0/64 done 00:08:09.104 Creating journal (8192 blocks): done 00:08:09.104 Writing superblocks and filesystem accounting information: 0/64 done 00:08:09.104 00:08:09.104 
19:17:07 -- common/autotest_common.sh@931 -- # return 0 00:08:09.104 19:17:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.365 19:17:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.365 19:17:12 -- target/filesystem.sh@25 -- # sync 00:08:14.365 19:17:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.365 19:17:12 -- target/filesystem.sh@27 -- # sync 00:08:14.365 19:17:12 -- target/filesystem.sh@29 -- # i=0 00:08:14.365 19:17:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.365 19:17:12 -- target/filesystem.sh@37 -- # kill -0 1100482 00:08:14.365 19:17:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.365 19:17:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.365 19:17:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.365 19:17:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.365 00:08:14.365 real 0m6.210s 00:08:14.365 user 0m0.015s 00:08:14.365 sys 0m0.070s 00:08:14.365 19:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.365 19:17:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.365 ************************************ 00:08:14.365 END TEST filesystem_in_capsule_ext4 00:08:14.365 ************************************ 00:08:14.365 19:17:12 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:14.365 19:17:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:14.365 19:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.365 19:17:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.365 ************************************ 00:08:14.365 START TEST filesystem_in_capsule_btrfs 00:08:14.365 ************************************ 00:08:14.365 19:17:12 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.365 19:17:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.365 19:17:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.365 19:17:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.365 19:17:12 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:14.365 19:17:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:14.365 19:17:12 -- common/autotest_common.sh@914 -- # local i=0 00:08:14.365 19:17:12 -- common/autotest_common.sh@915 -- # local force 00:08:14.365 19:17:12 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:14.365 19:17:12 -- common/autotest_common.sh@920 -- # force=-f 00:08:14.365 19:17:12 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:14.623 btrfs-progs v6.8.1 00:08:14.623 See https://btrfs.readthedocs.io for more information. 00:08:14.623 00:08:14.623 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:14.623 NOTE: several default settings have changed in version 5.15, please make sure 00:08:14.623 this does not affect your deployments: 00:08:14.623 - DUP for metadata (-m dup) 00:08:14.623 - enabled no-holes (-O no-holes) 00:08:14.624 - enabled free-space-tree (-R free-space-tree) 00:08:14.624 00:08:14.624 Label: (null) 00:08:14.624 UUID: 28b7f54f-e6c0-4f17-a6af-e00727bd0697 00:08:14.624 Node size: 16384 00:08:14.624 Sector size: 4096 (CPU page size: 4096) 00:08:14.624 Filesystem size: 510.00MiB 00:08:14.624 Block group profiles: 00:08:14.624 Data: single 8.00MiB 00:08:14.624 Metadata: DUP 32.00MiB 00:08:14.624 System: DUP 8.00MiB 00:08:14.624 SSD detected: yes 00:08:14.624 Zoned device: no 00:08:14.624 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:14.624 Checksum: crc32c 00:08:14.624 Number of devices: 1 00:08:14.624 Devices: 00:08:14.624 ID SIZE PATH 00:08:14.624 1 510.00MiB /dev/nvme0n1p1 00:08:14.624 00:08:14.624 19:17:12 -- common/autotest_common.sh@931 -- # return 0 00:08:14.624 19:17:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.560 19:17:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.560 19:17:13 -- target/filesystem.sh@25 -- # sync 00:08:15.560 19:17:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.560 19:17:13 -- target/filesystem.sh@27 -- # sync 00:08:15.560 19:17:13 -- target/filesystem.sh@29 -- # i=0 00:08:15.560 19:17:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.820 19:17:13 -- target/filesystem.sh@37 -- # kill -0 1100482 00:08:15.820 19:17:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.820 19:17:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.820 19:17:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.820 19:17:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.820 00:08:15.820 real 0m1.377s 00:08:15.820 user 0m0.015s 00:08:15.820 sys 0m0.105s 00:08:15.820 19:17:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.820 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.820 ************************************ 00:08:15.820 END TEST filesystem_in_capsule_btrfs 00:08:15.820 ************************************ 00:08:15.820 19:17:13 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:15.820 19:17:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:15.820 19:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.820 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.820 ************************************ 00:08:15.820 START TEST filesystem_in_capsule_xfs 00:08:15.820 ************************************ 00:08:15.820 19:17:13 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:15.820 19:17:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:15.820 19:17:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.820 19:17:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:15.820 19:17:13 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:15.820 19:17:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:15.820 19:17:13 -- common/autotest_common.sh@914 -- # local i=0 00:08:15.820 19:17:13 -- common/autotest_common.sh@915 -- # local force 00:08:15.820 19:17:13 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:15.820 19:17:13 -- common/autotest_common.sh@920 -- # force=-f 00:08:15.820 19:17:13 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:16.079 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:16.079 = sectsz=512 attr=2, projid32bit=1 00:08:16.079 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:16.079 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:16.079 data = bsize=4096 blocks=130560, imaxpct=25 00:08:16.079 = sunit=0 swidth=0 blks 00:08:16.080 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:16.080 log =internal log bsize=4096 blocks=16384, version=2 00:08:16.080 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:16.080 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.017 Discarding blocks...Done. 00:08:17.017 19:17:15 -- common/autotest_common.sh@931 -- # return 0 00:08:17.017 19:17:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.554 19:17:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.554 19:17:17 -- target/filesystem.sh@25 -- # sync 00:08:19.554 19:17:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.554 19:17:17 -- target/filesystem.sh@27 -- # sync 00:08:19.554 19:17:17 -- target/filesystem.sh@29 -- # i=0 00:08:19.554 19:17:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.554 19:17:17 -- target/filesystem.sh@37 -- # kill -0 1100482 00:08:19.554 19:17:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.554 19:17:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.554 19:17:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.554 19:17:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.554 00:08:19.554 real 0m3.770s 00:08:19.554 user 0m0.016s 00:08:19.554 sys 0m0.064s 00:08:19.554 19:17:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.554 19:17:17 -- common/autotest_common.sh@10 -- # set +x 00:08:19.554 ************************************ 00:08:19.554 END TEST filesystem_in_capsule_xfs 00:08:19.554 ************************************ 00:08:19.554 19:17:17 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:19.554 19:17:17 -- target/filesystem.sh@93 -- # sync 00:08:19.554 19:17:17 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.814 19:17:17 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.814 19:17:17 -- common/autotest_common.sh@1208 -- # local i=0 00:08:19.814 19:17:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:19.814 19:17:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.814 19:17:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:19.814 19:17:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.814 19:17:17 -- common/autotest_common.sh@1220 -- # return 0 00:08:19.814 19:17:17 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.814 19:17:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.814 19:17:17 -- common/autotest_common.sh@10 -- # set +x 00:08:19.814 19:17:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.814 19:17:17 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.814 19:17:17 -- target/filesystem.sh@101 -- # killprocess 1100482 00:08:19.814 19:17:17 -- common/autotest_common.sh@936 -- # '[' -z 1100482 ']' 00:08:19.814 19:17:17 -- common/autotest_common.sh@940 -- # kill -0 1100482 00:08:19.814 19:17:17 -- 
common/autotest_common.sh@941 -- # uname 00:08:19.814 19:17:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.814 19:17:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1100482 00:08:19.814 19:17:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.814 19:17:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.814 19:17:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1100482' 00:08:19.814 killing process with pid 1100482 00:08:19.814 19:17:17 -- common/autotest_common.sh@955 -- # kill 1100482 00:08:19.814 19:17:17 -- common/autotest_common.sh@960 -- # wait 1100482 00:08:20.073 19:17:18 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:20.073 00:08:20.073 real 0m17.885s 00:08:20.073 user 1m9.344s 00:08:20.073 sys 0m2.254s 00:08:20.073 19:17:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.073 19:17:18 -- common/autotest_common.sh@10 -- # set +x 00:08:20.073 ************************************ 00:08:20.073 END TEST nvmf_filesystem_in_capsule 00:08:20.073 ************************************ 00:08:20.331 19:17:18 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:20.331 19:17:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:20.331 19:17:18 -- nvmf/common.sh@116 -- # sync 00:08:20.331 19:17:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:20.331 19:17:18 -- nvmf/common.sh@119 -- # set +e 00:08:20.331 19:17:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:20.331 19:17:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:20.331 rmmod nvme_tcp 00:08:20.331 rmmod nvme_fabrics 00:08:20.331 rmmod nvme_keyring 00:08:20.331 19:17:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:20.331 19:17:18 -- nvmf/common.sh@123 -- # set -e 00:08:20.331 19:17:18 -- nvmf/common.sh@124 -- # return 0 00:08:20.331 19:17:18 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:20.331 19:17:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:20.331 19:17:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:20.331 19:17:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:20.331 19:17:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.331 19:17:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:20.331 19:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.331 19:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.331 19:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.238 19:17:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:22.238 00:08:22.238 real 0m45.535s 00:08:22.238 user 2m39.559s 00:08:22.238 sys 0m6.587s 00:08:22.238 19:17:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.238 19:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:22.238 ************************************ 00:08:22.238 END TEST nvmf_filesystem 00:08:22.238 ************************************ 00:08:22.238 19:17:20 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:22.238 19:17:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:22.238 19:17:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.238 19:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:22.238 ************************************ 00:08:22.238 START TEST nvmf_discovery 00:08:22.238 ************************************ 00:08:22.238 19:17:20 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:22.497 * Looking for test storage... 00:08:22.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.497 19:17:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.497 19:17:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.497 19:17:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.497 19:17:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.497 19:17:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.497 19:17:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.497 19:17:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.497 19:17:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.498 19:17:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.498 19:17:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.498 19:17:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.498 19:17:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.498 19:17:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.498 19:17:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.498 19:17:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.498 19:17:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.498 19:17:20 -- scripts/common.sh@344 -- # : 1 00:08:22.498 19:17:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.498 19:17:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.498 19:17:20 -- scripts/common.sh@364 -- # decimal 1 00:08:22.498 19:17:20 -- scripts/common.sh@352 -- # local d=1 00:08:22.498 19:17:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.498 19:17:20 -- scripts/common.sh@354 -- # echo 1 00:08:22.498 19:17:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.498 19:17:20 -- scripts/common.sh@365 -- # decimal 2 00:08:22.498 19:17:20 -- scripts/common.sh@352 -- # local d=2 00:08:22.498 19:17:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.498 19:17:20 -- scripts/common.sh@354 -- # echo 2 00:08:22.498 19:17:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.498 19:17:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.498 19:17:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.498 19:17:20 -- scripts/common.sh@367 -- # return 0 00:08:22.498 19:17:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.498 19:17:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.498 --rc genhtml_branch_coverage=1 00:08:22.498 --rc genhtml_function_coverage=1 00:08:22.498 --rc genhtml_legend=1 00:08:22.498 --rc geninfo_all_blocks=1 00:08:22.498 --rc geninfo_unexecuted_blocks=1 00:08:22.498 00:08:22.498 ' 00:08:22.498 19:17:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.498 --rc genhtml_branch_coverage=1 00:08:22.498 --rc genhtml_function_coverage=1 00:08:22.498 --rc genhtml_legend=1 00:08:22.498 --rc geninfo_all_blocks=1 00:08:22.498 --rc geninfo_unexecuted_blocks=1 00:08:22.498 00:08:22.498 ' 00:08:22.498 19:17:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.498 --rc genhtml_branch_coverage=1 00:08:22.498 
--rc genhtml_function_coverage=1 00:08:22.498 --rc genhtml_legend=1 00:08:22.498 --rc geninfo_all_blocks=1 00:08:22.498 --rc geninfo_unexecuted_blocks=1 00:08:22.498 00:08:22.498 ' 00:08:22.498 19:17:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.498 --rc genhtml_branch_coverage=1 00:08:22.498 --rc genhtml_function_coverage=1 00:08:22.498 --rc genhtml_legend=1 00:08:22.498 --rc geninfo_all_blocks=1 00:08:22.498 --rc geninfo_unexecuted_blocks=1 00:08:22.498 00:08:22.498 ' 00:08:22.498 19:17:20 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.498 19:17:20 -- nvmf/common.sh@7 -- # uname -s 00:08:22.498 19:17:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.498 19:17:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.498 19:17:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.498 19:17:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.498 19:17:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.498 19:17:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.498 19:17:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.498 19:17:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.498 19:17:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.498 19:17:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.498 19:17:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.498 19:17:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.498 19:17:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.498 19:17:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.498 19:17:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.498 19:17:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.498 19:17:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.498 19:17:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.498 19:17:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.498 19:17:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.498 19:17:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.498 19:17:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.498 19:17:20 -- paths/export.sh@5 -- # export PATH 00:08:22.498 19:17:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.498 19:17:20 -- nvmf/common.sh@46 -- # : 0 00:08:22.498 19:17:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.498 19:17:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.498 19:17:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.498 19:17:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.498 19:17:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.498 19:17:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:22.498 19:17:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.498 19:17:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.498 19:17:20 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:22.498 19:17:20 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:22.498 19:17:20 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:22.498 19:17:20 -- target/discovery.sh@15 -- # hash nvme 00:08:22.498 19:17:20 -- target/discovery.sh@20 -- # nvmftestinit 00:08:22.498 19:17:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:22.498 19:17:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.498 19:17:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:22.498 19:17:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:22.498 19:17:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:22.498 19:17:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.498 19:17:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.498 19:17:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.498 19:17:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:22.498 19:17:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:22.498 19:17:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:22.498 19:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:25.039 19:17:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:25.039 19:17:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:25.039 19:17:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:25.039 19:17:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:25.039 19:17:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:25.039 19:17:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:25.039 19:17:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:25.039 19:17:22 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:25.039 19:17:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:25.039 19:17:22 -- nvmf/common.sh@295 -- # e810=() 00:08:25.039 19:17:22 -- nvmf/common.sh@295 -- # local -ga e810 00:08:25.039 19:17:22 -- nvmf/common.sh@296 -- # x722=() 00:08:25.039 19:17:22 -- nvmf/common.sh@296 -- # local -ga x722 00:08:25.039 19:17:22 -- nvmf/common.sh@297 -- # mlx=() 00:08:25.039 19:17:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:25.039 19:17:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.039 19:17:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:25.039 19:17:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:25.039 19:17:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:25.039 19:17:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:25.039 19:17:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.039 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.039 19:17:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:25.039 19:17:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.039 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.039 19:17:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:25.039 19:17:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:25.039 19:17:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.039 19:17:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:25.039 19:17:22 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.039 19:17:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.039 19:17:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.039 19:17:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:25.039 19:17:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.039 19:17:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:25.039 19:17:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.039 19:17:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.039 19:17:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.039 19:17:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:25.039 19:17:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:25.039 19:17:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:25.039 19:17:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.039 19:17:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.039 19:17:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.039 19:17:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:25.039 19:17:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.039 19:17:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.039 19:17:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:25.039 19:17:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.039 19:17:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.039 19:17:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:25.039 19:17:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:25.039 19:17:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.039 19:17:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.039 19:17:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.039 19:17:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.039 19:17:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:25.039 19:17:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.039 19:17:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.039 19:17:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.039 19:17:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:25.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:08:25.039 00:08:25.039 --- 10.0.0.2 ping statistics --- 00:08:25.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.039 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:08:25.039 19:17:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:25.039 00:08:25.039 --- 10.0.0.1 ping statistics --- 00:08:25.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.039 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:25.039 19:17:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.039 19:17:22 -- nvmf/common.sh@410 -- # return 0 00:08:25.039 19:17:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:25.039 19:17:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.039 19:17:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:25.039 19:17:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.039 19:17:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:25.039 19:17:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:25.039 19:17:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:25.039 19:17:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.039 19:17:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.039 19:17:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.039 19:17:22 -- nvmf/common.sh@469 -- # nvmfpid=1104725 00:08:25.039 19:17:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.039 19:17:22 -- nvmf/common.sh@470 -- # waitforlisten 1104725 00:08:25.039 19:17:22 -- common/autotest_common.sh@829 -- # '[' -z 1104725 ']' 00:08:25.040 19:17:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.040 19:17:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.040 19:17:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.040 19:17:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.040 19:17:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.040 [2024-11-17 19:17:22.902855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:25.040 [2024-11-17 19:17:22.902922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.040 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.040 [2024-11-17 19:17:22.977184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.040 [2024-11-17 19:17:23.068876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.040 [2024-11-17 19:17:23.069020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.040 [2024-11-17 19:17:23.069037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.040 [2024-11-17 19:17:23.069050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
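The network plumbing for this run is visible in the records above: nvmftestinit detects the two ice-driven ports (cvl_0_0, cvl_0_1), moves cvl_0_0 into the cvl_0_0_ns_spdk namespace to act as the target side, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420 in iptables, and ping-verifies both directions before nvmf_tgt is started inside the namespace. Condensed into a sketch, using the interface names detected on this host:

    # Sketch of the namespace setup performed by nvmf_tcp_init in the trace above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator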
00:08:25.040 [2024-11-17 19:17:23.069118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.040 [2024-11-17 19:17:23.069145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.040 [2024-11-17 19:17:23.069194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.040 [2024-11-17 19:17:23.069196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.979 19:17:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.979 19:17:23 -- common/autotest_common.sh@862 -- # return 0 00:08:25.979 19:17:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:25.979 19:17:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.979 19:17:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 [2024-11-17 19:17:23.937458] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@26 -- # seq 1 4 00:08:25.979 19:17:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.979 19:17:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 Null1 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 [2024-11-17 19:17:23.977754] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.979 19:17:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 Null2 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:25.979 19:17:23 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:25.979 19:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.979 19:17:24 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 Null3 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.979 19:17:24 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 Null4 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.979 19:17:24 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:25.979 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.979 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.980 19:17:24 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:25.980 
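The discovery test stages four dummy targets in a loop: for each i in 1..4 it creates a 102400-block null bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnode$i with a zero-padded serial number, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420; the next records add a listener on the discovery subsystem itself plus a referral on port 4430. A compact sketch of the loop traced above, under the same rpc.py assumption as earlier:

    rpc=./scripts/rpc.py   # path assumed; the trace drives these calls through rpc_cmd
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512
        # serials SPDK00000000000001 .. SPDK00000000000004 as in the trace
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "$(printf 'SPDK%014d' "$i")"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done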
19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.980 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.980 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.980 19:17:24 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.980 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.980 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.980 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.980 19:17:24 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:25.980 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.980 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.980 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.980 19:17:24 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:26.241 00:08:26.241 Discovery Log Number of Records 6, Generation counter 6 00:08:26.241 =====Discovery Log Entry 0====== 00:08:26.241 trtype: tcp 00:08:26.241 adrfam: ipv4 00:08:26.241 subtype: current discovery subsystem 00:08:26.241 treq: not required 00:08:26.241 portid: 0 00:08:26.241 trsvcid: 4420 00:08:26.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:26.241 traddr: 10.0.0.2 00:08:26.241 eflags: explicit discovery connections, duplicate discovery information 00:08:26.241 sectype: none 00:08:26.241 =====Discovery Log Entry 1====== 00:08:26.241 trtype: tcp 00:08:26.241 adrfam: ipv4 00:08:26.241 subtype: nvme subsystem 00:08:26.241 treq: not required 00:08:26.241 portid: 0 00:08:26.241 trsvcid: 4420 00:08:26.241 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:26.241 traddr: 10.0.0.2 00:08:26.241 eflags: none 00:08:26.241 sectype: none 00:08:26.241 =====Discovery Log Entry 2====== 00:08:26.241 trtype: tcp 00:08:26.241 adrfam: ipv4 00:08:26.241 subtype: nvme subsystem 00:08:26.241 treq: not required 00:08:26.241 portid: 0 00:08:26.241 trsvcid: 4420 00:08:26.241 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:26.241 traddr: 10.0.0.2 00:08:26.241 eflags: none 00:08:26.241 sectype: none 00:08:26.241 =====Discovery Log Entry 3====== 00:08:26.241 trtype: tcp 00:08:26.241 adrfam: ipv4 00:08:26.241 subtype: nvme subsystem 00:08:26.241 treq: not required 00:08:26.241 portid: 0 00:08:26.241 trsvcid: 4420 00:08:26.241 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:26.241 traddr: 10.0.0.2 00:08:26.241 eflags: none 00:08:26.241 sectype: none 00:08:26.241 =====Discovery Log Entry 4====== 00:08:26.241 trtype: tcp 00:08:26.241 adrfam: ipv4 00:08:26.241 subtype: nvme subsystem 00:08:26.241 treq: not required 00:08:26.241 portid: 0 00:08:26.241 trsvcid: 4420 00:08:26.241 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:26.241 traddr: 10.0.0.2 00:08:26.241 eflags: none 00:08:26.241 sectype: none 00:08:26.241 =====Discovery Log Entry 5====== 00:08:26.241 trtype: tcp 00:08:26.241 adrfam: ipv4 00:08:26.241 subtype: discovery subsystem referral 00:08:26.241 treq: not required 00:08:26.241 portid: 0 00:08:26.241 trsvcid: 4430 00:08:26.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:26.241 traddr: 10.0.0.2 00:08:26.241 eflags: none 00:08:26.241 sectype: none 00:08:26.241 19:17:24 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:26.241 Perform nvmf subsystem discovery via RPC 00:08:26.241 19:17:24 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:26.241 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.241 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.241 [2024-11-17 19:17:24.270536] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:26.241 [ 00:08:26.241 { 00:08:26.241 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:26.241 "subtype": "Discovery", 00:08:26.241 "listen_addresses": [ 00:08:26.241 { 00:08:26.241 "transport": "TCP", 00:08:26.241 "trtype": "TCP", 00:08:26.241 "adrfam": "IPv4", 00:08:26.241 "traddr": "10.0.0.2", 00:08:26.241 "trsvcid": "4420" 00:08:26.241 } 00:08:26.241 ], 00:08:26.241 "allow_any_host": true, 00:08:26.241 "hosts": [] 00:08:26.241 }, 00:08:26.241 { 00:08:26.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.241 "subtype": "NVMe", 00:08:26.241 "listen_addresses": [ 00:08:26.241 { 00:08:26.241 "transport": "TCP", 00:08:26.241 "trtype": "TCP", 00:08:26.241 "adrfam": "IPv4", 00:08:26.241 "traddr": "10.0.0.2", 00:08:26.241 "trsvcid": "4420" 00:08:26.241 } 00:08:26.241 ], 00:08:26.241 "allow_any_host": true, 00:08:26.241 "hosts": [], 00:08:26.241 "serial_number": "SPDK00000000000001", 00:08:26.241 "model_number": "SPDK bdev Controller", 00:08:26.241 "max_namespaces": 32, 00:08:26.241 "min_cntlid": 1, 00:08:26.241 "max_cntlid": 65519, 00:08:26.241 "namespaces": [ 00:08:26.241 { 00:08:26.241 "nsid": 1, 00:08:26.241 "bdev_name": "Null1", 00:08:26.241 "name": "Null1", 00:08:26.241 "nguid": "5E7E8F190D66450EBA946A608877A4CC", 00:08:26.241 "uuid": "5e7e8f19-0d66-450e-ba94-6a608877a4cc" 00:08:26.241 } 00:08:26.241 ] 00:08:26.241 }, 00:08:26.241 { 00:08:26.241 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:26.241 "subtype": "NVMe", 00:08:26.241 "listen_addresses": [ 00:08:26.241 { 00:08:26.241 "transport": "TCP", 00:08:26.241 "trtype": "TCP", 00:08:26.241 "adrfam": "IPv4", 00:08:26.241 "traddr": "10.0.0.2", 00:08:26.241 "trsvcid": "4420" 00:08:26.241 } 00:08:26.241 ], 00:08:26.241 "allow_any_host": true, 00:08:26.241 "hosts": [], 00:08:26.241 "serial_number": "SPDK00000000000002", 00:08:26.241 "model_number": "SPDK bdev Controller", 00:08:26.241 "max_namespaces": 32, 00:08:26.241 "min_cntlid": 1, 00:08:26.241 "max_cntlid": 65519, 00:08:26.241 "namespaces": [ 00:08:26.241 { 00:08:26.241 "nsid": 1, 00:08:26.241 "bdev_name": "Null2", 00:08:26.241 "name": "Null2", 00:08:26.241 "nguid": "572249BA2FCB4FF886CBF415C967F3A9", 00:08:26.241 "uuid": "572249ba-2fcb-4ff8-86cb-f415c967f3a9" 00:08:26.242 } 00:08:26.242 ] 00:08:26.242 }, 00:08:26.242 { 00:08:26.242 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:26.242 "subtype": "NVMe", 00:08:26.242 "listen_addresses": [ 00:08:26.242 { 00:08:26.242 "transport": "TCP", 00:08:26.242 "trtype": "TCP", 00:08:26.242 "adrfam": "IPv4", 00:08:26.242 "traddr": "10.0.0.2", 00:08:26.242 "trsvcid": "4420" 00:08:26.242 } 00:08:26.242 ], 00:08:26.242 "allow_any_host": true, 00:08:26.242 "hosts": [], 00:08:26.242 "serial_number": "SPDK00000000000003", 00:08:26.242 "model_number": "SPDK bdev Controller", 00:08:26.242 "max_namespaces": 32, 00:08:26.242 "min_cntlid": 1, 00:08:26.242 "max_cntlid": 65519, 00:08:26.242 "namespaces": [ 00:08:26.242 { 00:08:26.242 "nsid": 1, 00:08:26.242 "bdev_name": "Null3", 00:08:26.242 "name": "Null3", 00:08:26.242 "nguid": "0101F926E6384704BBC56B5A8EEEC8F3", 00:08:26.242 "uuid": "0101f926-e638-4704-bbc5-6b5a8eeec8f3" 00:08:26.242 } 00:08:26.242 ] 
00:08:26.242 }, 00:08:26.242 { 00:08:26.242 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:26.242 "subtype": "NVMe", 00:08:26.242 "listen_addresses": [ 00:08:26.242 { 00:08:26.242 "transport": "TCP", 00:08:26.242 "trtype": "TCP", 00:08:26.242 "adrfam": "IPv4", 00:08:26.242 "traddr": "10.0.0.2", 00:08:26.242 "trsvcid": "4420" 00:08:26.242 } 00:08:26.242 ], 00:08:26.242 "allow_any_host": true, 00:08:26.242 "hosts": [], 00:08:26.242 "serial_number": "SPDK00000000000004", 00:08:26.242 "model_number": "SPDK bdev Controller", 00:08:26.242 "max_namespaces": 32, 00:08:26.242 "min_cntlid": 1, 00:08:26.242 "max_cntlid": 65519, 00:08:26.242 "namespaces": [ 00:08:26.242 { 00:08:26.242 "nsid": 1, 00:08:26.242 "bdev_name": "Null4", 00:08:26.242 "name": "Null4", 00:08:26.242 "nguid": "0A3E49113F264D56AA8780DF9BB7A958", 00:08:26.242 "uuid": "0a3e4911-3f26-4d56-aa87-80df9bb7a958" 00:08:26.242 } 00:08:26.242 ] 00:08:26.242 } 00:08:26.242 ] 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@42 -- # seq 1 4 00:08:26.242 19:17:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.242 19:17:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.242 19:17:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.242 19:17:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:26.242 19:17:24 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
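Between provisioning and the per-subsystem teardown shown in the surrounding records, the discovery state is verified twice: once from the initiator with nvme discover (six log entries: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral) and once over RPC with nvmf_get_subsystems, whose JSON reply is dumped above. A minimal sketch of those two checks, assuming nvme-cli, jq, and scripts/rpc.py are available and reusing the host NQN/ID generated for this run:

    hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
    nvme discover --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -a 10.0.0.2 -s 4420
    # jq filter added here for illustration only; it lists the discovery NQN plus cnode1..cnode4
    ./scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'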
00:08:26.242 19:17:24 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:26.242 19:17:24 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:26.242 19:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.242 19:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:26.242 19:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.242 19:17:24 -- target/discovery.sh@49 -- # check_bdevs= 00:08:26.242 19:17:24 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:26.242 19:17:24 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:26.242 19:17:24 -- target/discovery.sh@57 -- # nvmftestfini 00:08:26.242 19:17:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:26.242 19:17:24 -- nvmf/common.sh@116 -- # sync 00:08:26.242 19:17:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:26.242 19:17:24 -- nvmf/common.sh@119 -- # set +e 00:08:26.242 19:17:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:26.242 19:17:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:26.242 rmmod nvme_tcp 00:08:26.242 rmmod nvme_fabrics 00:08:26.242 rmmod nvme_keyring 00:08:26.242 19:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:26.242 19:17:24 -- nvmf/common.sh@123 -- # set -e 00:08:26.242 19:17:24 -- nvmf/common.sh@124 -- # return 0 00:08:26.242 19:17:24 -- nvmf/common.sh@477 -- # '[' -n 1104725 ']' 00:08:26.242 19:17:24 -- nvmf/common.sh@478 -- # killprocess 1104725 00:08:26.242 19:17:24 -- common/autotest_common.sh@936 -- # '[' -z 1104725 ']' 00:08:26.242 19:17:24 -- common/autotest_common.sh@940 -- # kill -0 1104725 00:08:26.242 19:17:24 -- common/autotest_common.sh@941 -- # uname 00:08:26.242 19:17:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:26.242 19:17:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1104725 00:08:26.242 19:17:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:26.242 19:17:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:26.242 19:17:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1104725' 00:08:26.242 killing process with pid 1104725 00:08:26.242 19:17:24 -- common/autotest_common.sh@955 -- # kill 1104725 00:08:26.242 [2024-11-17 19:17:24.477322] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:26.242 19:17:24 -- common/autotest_common.sh@960 -- # wait 1104725 00:08:26.503 19:17:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:26.503 19:17:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:26.503 19:17:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:26.503 19:17:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.503 19:17:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:26.503 19:17:24 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.503 19:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.503 19:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.051 19:17:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:29.051 00:08:29.051 real 0m6.279s 00:08:29.051 user 0m7.628s 00:08:29.051 sys 0m1.957s 00:08:29.051 19:17:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.052 19:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:29.052 ************************************ 00:08:29.052 END TEST nvmf_discovery 00:08:29.052 ************************************ 00:08:29.052 19:17:26 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:29.052 19:17:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:29.052 19:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.052 19:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:29.052 ************************************ 00:08:29.052 START TEST nvmf_referrals 00:08:29.052 ************************************ 00:08:29.052 19:17:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:29.052 * Looking for test storage... 00:08:29.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.052 19:17:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.052 19:17:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.052 19:17:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.052 19:17:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.052 19:17:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.052 19:17:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.052 19:17:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.052 19:17:26 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.052 19:17:26 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.052 19:17:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.052 19:17:26 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.052 19:17:26 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.052 19:17:26 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.052 19:17:26 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.052 19:17:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.052 19:17:26 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.052 19:17:26 -- scripts/common.sh@344 -- # : 1 00:08:29.052 19:17:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.052 19:17:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.052 19:17:26 -- scripts/common.sh@364 -- # decimal 1 00:08:29.052 19:17:26 -- scripts/common.sh@352 -- # local d=1 00:08:29.052 19:17:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.052 19:17:26 -- scripts/common.sh@354 -- # echo 1 00:08:29.052 19:17:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.052 19:17:26 -- scripts/common.sh@365 -- # decimal 2 00:08:29.052 19:17:26 -- scripts/common.sh@352 -- # local d=2 00:08:29.052 19:17:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.052 19:17:26 -- scripts/common.sh@354 -- # echo 2 00:08:29.052 19:17:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.052 19:17:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.052 19:17:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.052 19:17:26 -- scripts/common.sh@367 -- # return 0 00:08:29.052 19:17:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.052 19:17:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.052 --rc genhtml_branch_coverage=1 00:08:29.052 --rc genhtml_function_coverage=1 00:08:29.052 --rc genhtml_legend=1 00:08:29.052 --rc geninfo_all_blocks=1 00:08:29.052 --rc geninfo_unexecuted_blocks=1 00:08:29.052 00:08:29.052 ' 00:08:29.052 19:17:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.052 --rc genhtml_branch_coverage=1 00:08:29.052 --rc genhtml_function_coverage=1 00:08:29.052 --rc genhtml_legend=1 00:08:29.052 --rc geninfo_all_blocks=1 00:08:29.052 --rc geninfo_unexecuted_blocks=1 00:08:29.052 00:08:29.052 ' 00:08:29.052 19:17:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.052 --rc genhtml_branch_coverage=1 00:08:29.052 --rc genhtml_function_coverage=1 00:08:29.052 --rc genhtml_legend=1 00:08:29.052 --rc geninfo_all_blocks=1 00:08:29.052 --rc geninfo_unexecuted_blocks=1 00:08:29.052 00:08:29.052 ' 00:08:29.052 19:17:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.052 --rc genhtml_branch_coverage=1 00:08:29.052 --rc genhtml_function_coverage=1 00:08:29.052 --rc genhtml_legend=1 00:08:29.052 --rc geninfo_all_blocks=1 00:08:29.052 --rc geninfo_unexecuted_blocks=1 00:08:29.052 00:08:29.052 ' 00:08:29.052 19:17:26 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.052 19:17:26 -- nvmf/common.sh@7 -- # uname -s 00:08:29.052 19:17:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.052 19:17:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.052 19:17:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.052 19:17:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.052 19:17:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.052 19:17:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.052 19:17:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.052 19:17:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.052 19:17:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.052 19:17:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.052 19:17:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.052 19:17:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.052 19:17:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.052 19:17:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.052 19:17:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.052 19:17:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.052 19:17:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.052 19:17:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.052 19:17:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.052 19:17:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.052 19:17:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.052 19:17:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.052 19:17:26 -- paths/export.sh@5 -- # export PATH 00:08:29.052 19:17:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.052 19:17:26 -- nvmf/common.sh@46 -- # : 0 00:08:29.052 19:17:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.052 19:17:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.052 19:17:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.052 19:17:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.052 19:17:26 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.052 19:17:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.052 19:17:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.052 19:17:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.052 19:17:26 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:29.052 19:17:26 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:29.052 19:17:26 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:29.053 19:17:26 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:29.053 19:17:26 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:29.053 19:17:26 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:29.053 19:17:26 -- target/referrals.sh@37 -- # nvmftestinit 00:08:29.053 19:17:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:29.053 19:17:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.053 19:17:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:29.053 19:17:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:29.053 19:17:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:29.053 19:17:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.053 19:17:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.053 19:17:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.053 19:17:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:29.053 19:17:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:29.053 19:17:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:29.053 19:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:30.957 19:17:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:30.957 19:17:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:30.957 19:17:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:30.957 19:17:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:30.957 19:17:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:30.957 19:17:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:30.957 19:17:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:30.957 19:17:29 -- nvmf/common.sh@294 -- # net_devs=() 00:08:30.957 19:17:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:30.957 19:17:29 -- nvmf/common.sh@295 -- # e810=() 00:08:30.957 19:17:29 -- nvmf/common.sh@295 -- # local -ga e810 00:08:30.957 19:17:29 -- nvmf/common.sh@296 -- # x722=() 00:08:30.957 19:17:29 -- nvmf/common.sh@296 -- # local -ga x722 00:08:30.957 19:17:29 -- nvmf/common.sh@297 -- # mlx=() 00:08:30.957 19:17:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:30.957 19:17:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@316 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.957 19:17:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:30.957 19:17:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:30.957 19:17:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:30.957 19:17:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:30.957 19:17:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.957 19:17:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:30.957 19:17:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.957 19:17:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:30.957 19:17:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:30.957 19:17:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.957 19:17:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:30.957 19:17:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.957 19:17:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.957 19:17:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.957 19:17:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:30.957 19:17:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.957 19:17:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:30.957 19:17:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.957 19:17:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.957 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.957 19:17:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.957 19:17:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:30.957 19:17:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:30.957 19:17:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:30.957 19:17:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.957 19:17:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.957 19:17:29 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.957 19:17:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:30.957 19:17:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.957 19:17:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.957 19:17:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:30.957 19:17:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.957 19:17:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.957 19:17:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:30.957 19:17:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:30.957 19:17:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.957 19:17:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.957 19:17:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.957 19:17:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.957 19:17:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:30.957 19:17:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.957 19:17:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.957 19:17:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.957 19:17:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:30.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:08:30.957 00:08:30.957 --- 10.0.0.2 ping statistics --- 00:08:30.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.957 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:08:30.957 19:17:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:30.957 00:08:30.957 --- 10.0.0.1 ping statistics --- 00:08:30.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.957 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:30.957 19:17:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.957 19:17:29 -- nvmf/common.sh@410 -- # return 0 00:08:30.957 19:17:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:30.957 19:17:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.957 19:17:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:30.957 19:17:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.957 19:17:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:30.957 19:17:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:30.957 19:17:29 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:30.957 19:17:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:30.957 19:17:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.957 19:17:29 -- common/autotest_common.sh@10 -- # set +x 00:08:30.957 19:17:29 -- nvmf/common.sh@469 -- # nvmfpid=1106977 00:08:30.957 19:17:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.957 19:17:29 -- nvmf/common.sh@470 -- # waitforlisten 1106977 00:08:30.957 19:17:29 -- common/autotest_common.sh@829 -- # '[' -z 1106977 ']' 00:08:30.957 19:17:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.958 19:17:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.958 19:17:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.958 19:17:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.958 19:17:29 -- common/autotest_common.sh@10 -- # set +x 00:08:31.218 [2024-11-17 19:17:29.234574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:31.218 [2024-11-17 19:17:29.234646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.218 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.218 [2024-11-17 19:17:29.299300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.218 [2024-11-17 19:17:29.385405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.218 [2024-11-17 19:17:29.385563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.218 [2024-11-17 19:17:29.385583] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.218 [2024-11-17 19:17:29.385597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
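Before the referrals test can talk to anything, nvmftestinit (traced above) builds a point-to-point NVMe/TCP test bed: one port of the NIC is moved into a private network namespace and addressed as 10.0.0.2, the other port stays in the root namespace as 10.0.0.1, TCP/4420 is opened in iptables, reachability is verified with ping in both directions, and nvmf_tgt is then started inside the namespace. A condensed sketch of that wiring follows; the interface variables, namespace name and SPDK_DIR are placeholders (this run uses cvl_0_0/cvl_0_1 and cvl_0_0_ns_spdk), not values to be relied on.

# Sketch of the two-port loopback topology used by the harness.
# TGT_IF / INI_IF / NS / SPDK_DIR are placeholders, not taken from the log.
TGT_IF=${TGT_IF:-cvl_0_0}      # moved into the target namespace (10.0.0.2)
INI_IF=${INI_IF:-cvl_0_1}      # stays in the root namespace     (10.0.0.1)
NS=${NS:-nvmf_tgt_ns}
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in and confirm both directions answer.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace, as nvmfappstart does in the trace.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
NVMF_PID=$!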
00:08:31.218 [2024-11-17 19:17:29.385694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.218 [2024-11-17 19:17:29.385743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.218 [2024-11-17 19:17:29.385797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.218 [2024-11-17 19:17:29.385799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.155 19:17:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.155 19:17:30 -- common/autotest_common.sh@862 -- # return 0 00:08:32.155 19:17:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:32.155 19:17:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 19:17:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.155 19:17:30 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 [2024-11-17 19:17:30.240420] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 [2024-11-17 19:17:30.252628] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.155 19:17:30 -- target/referrals.sh@48 -- # jq length 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:32.155 19:17:30 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:32.155 19:17:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:32.155 19:17:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.155 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 
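With the target up, the referrals test above creates the TCP transport, exposes the discovery service on 10.0.0.2 port 8009, and registers three referrals pointing at 127.0.0.2, 127.0.0.3 and 127.0.0.4 on port 4430. A minimal rpc.py sketch of those calls; the rpc.py path is an assumption, while the RPC names and arguments are exactly the ones in the trace.

# Sketch of the referral setup shown above, via SPDK's rpc.py (path assumed).
rpc="${SPDK_DIR:-.}/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# The test then expects exactly three referrals back.
$rpc nvmf_discovery_get_referrals | jq length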
00:08:32.155 19:17:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:32.155 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.155 19:17:30 -- target/referrals.sh@21 -- # sort 00:08:32.155 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:32.155 19:17:30 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:32.155 19:17:30 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:32.155 19:17:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.155 19:17:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.155 19:17:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.155 19:17:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.155 19:17:30 -- target/referrals.sh@26 -- # sort 00:08:32.413 19:17:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:32.413 19:17:30 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:32.413 19:17:30 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:32.413 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.413 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.413 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.413 19:17:30 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:32.413 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.413 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.414 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.414 19:17:30 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:32.414 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.414 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.414 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.414 19:17:30 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.414 19:17:30 -- target/referrals.sh@56 -- # jq length 00:08:32.414 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.414 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.414 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.414 19:17:30 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:32.414 19:17:30 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:32.414 19:17:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.414 19:17:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.414 19:17:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.414 19:17:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.414 19:17:30 -- target/referrals.sh@26 -- # sort 00:08:32.673 19:17:30 -- target/referrals.sh@26 -- # echo 00:08:32.673 19:17:30 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:32.673 19:17:30 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:32.673 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.673 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.673 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.673 19:17:30 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:32.673 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.673 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.673 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.673 19:17:30 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:32.673 19:17:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:32.673 19:17:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.673 19:17:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:32.673 19:17:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.673 19:17:30 -- target/referrals.sh@21 -- # sort 00:08:32.673 19:17:30 -- common/autotest_common.sh@10 -- # set +x 00:08:32.673 19:17:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.673 19:17:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:32.673 19:17:30 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:32.673 19:17:30 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:32.673 19:17:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.673 19:17:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.673 19:17:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.673 19:17:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.673 19:17:30 -- target/referrals.sh@26 -- # sort 00:08:32.931 19:17:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:32.931 19:17:31 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:32.931 19:17:31 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:32.931 19:17:31 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:32.931 19:17:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:32.931 19:17:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.931 19:17:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:33.190 19:17:31 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:33.190 19:17:31 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:33.190 19:17:31 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:33.190 19:17:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:33.190 19:17:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.190 19:17:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:33.449 19:17:31 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:33.449 19:17:31 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.449 19:17:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.449 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.449 19:17:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.449 19:17:31 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:33.449 19:17:31 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.449 19:17:31 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.449 19:17:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.449 19:17:31 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.449 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.450 19:17:31 -- target/referrals.sh@21 -- # sort 00:08:33.450 19:17:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.450 19:17:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:33.450 19:17:31 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:33.450 19:17:31 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:33.450 19:17:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.450 19:17:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.450 19:17:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.450 19:17:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.450 19:17:31 -- target/referrals.sh@26 -- # sort 00:08:33.450 19:17:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:33.450 19:17:31 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:33.450 19:17:31 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:33.450 19:17:31 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:33.450 19:17:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:33.450 19:17:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.450 19:17:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:33.708 19:17:31 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:33.708 19:17:31 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:33.708 19:17:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:33.708 19:17:31 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:33.708 19:17:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.708 19:17:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
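On the initiator side, the checks above and below pull the discovery log page from 10.0.0.2:8009 with nvme-cli and pick entries out of .records[] with jq. A hedged sketch of that verification; the host NQN/ID handling here is an assumption (the harness generates them once with nvme gen-hostnqn), the discover flags and jq filters mirror the traced command lines.

# Host-side referral check; HOSTNQN/HOSTID are placeholders.
HOSTNQN=${HOSTNQN:-$(nvme gen-hostnqn)}
HOSTID=${HOSTID:-${HOSTNQN##*:}}          # the uuid part of the generated NQN

# All referral/subsystem addresses except the current discovery subsystem.
nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort

# Only the discovery-subsystem referral entries, by subsystem NQN.
nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'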
00:08:33.708 19:17:31 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:33.708 19:17:31 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:33.708 19:17:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.708 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.708 19:17:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.708 19:17:31 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.708 19:17:31 -- target/referrals.sh@82 -- # jq length 00:08:33.708 19:17:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.708 19:17:31 -- common/autotest_common.sh@10 -- # set +x 00:08:33.966 19:17:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.966 19:17:32 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:33.966 19:17:32 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:33.966 19:17:32 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.966 19:17:32 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.966 19:17:32 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.966 19:17:32 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.966 19:17:32 -- target/referrals.sh@26 -- # sort 00:08:34.226 19:17:32 -- target/referrals.sh@26 -- # echo 00:08:34.226 19:17:32 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.226 19:17:32 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.226 19:17:32 -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.226 19:17:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.226 19:17:32 -- nvmf/common.sh@116 -- # sync 00:08:34.226 19:17:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.226 19:17:32 -- nvmf/common.sh@119 -- # set +e 00:08:34.226 19:17:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.226 19:17:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:34.226 rmmod nvme_tcp 00:08:34.226 rmmod nvme_fabrics 00:08:34.226 rmmod nvme_keyring 00:08:34.226 19:17:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.226 19:17:32 -- nvmf/common.sh@123 -- # set -e 00:08:34.226 19:17:32 -- nvmf/common.sh@124 -- # return 0 00:08:34.226 19:17:32 -- nvmf/common.sh@477 -- # '[' -n 1106977 ']' 00:08:34.226 19:17:32 -- nvmf/common.sh@478 -- # killprocess 1106977 00:08:34.226 19:17:32 -- common/autotest_common.sh@936 -- # '[' -z 1106977 ']' 00:08:34.226 19:17:32 -- common/autotest_common.sh@940 -- # kill -0 1106977 00:08:34.226 19:17:32 -- common/autotest_common.sh@941 -- # uname 00:08:34.226 19:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.226 19:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1106977 00:08:34.226 19:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.226 19:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.226 19:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1106977' 00:08:34.226 killing process with pid 1106977 00:08:34.226 19:17:32 -- common/autotest_common.sh@955 -- # kill 1106977 00:08:34.226 19:17:32 -- common/autotest_common.sh@960 -- # wait 1106977 
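The teardown traced around this point mirrors nvmftestfini: sync, unload the kernel initiator modules, kill the nvmf_tgt process, then drop the test addressing. A rough manual equivalent under the same placeholder names used in the sketches above; it assumes nvmf_tgt was launched from the current shell, otherwise wait will not apply.

# Rough manual cleanup; NVMF_PID, INI_IF and NS are placeholders.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target started earlier (wait only works if it is a child of this shell).
kill "${NVMF_PID:?set to the nvmf_tgt pid}" && wait "$NVMF_PID"

# Drop the test address and the namespace created during setup.
ip -4 addr flush "${INI_IF:-cvl_0_1}"
ip netns delete "${NS:-nvmf_tgt_ns}" 2>/dev/null || true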
00:08:34.488 19:17:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:34.488 19:17:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:34.488 19:17:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:34.488 19:17:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.488 19:17:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:34.488 19:17:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.488 19:17:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.488 19:17:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.398 19:17:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:36.398 00:08:36.398 real 0m7.832s 00:08:36.398 user 0m14.387s 00:08:36.398 sys 0m2.310s 00:08:36.398 19:17:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.398 19:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:36.398 ************************************ 00:08:36.398 END TEST nvmf_referrals 00:08:36.398 ************************************ 00:08:36.398 19:17:34 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:36.398 19:17:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.398 19:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.398 19:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:36.398 ************************************ 00:08:36.398 START TEST nvmf_connect_disconnect 00:08:36.398 ************************************ 00:08:36.398 19:17:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:36.658 * Looking for test storage... 00:08:36.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.658 19:17:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:36.658 19:17:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:36.658 19:17:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:36.658 19:17:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:36.658 19:17:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:36.658 19:17:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:36.658 19:17:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:36.658 19:17:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:36.658 19:17:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:36.658 19:17:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.658 19:17:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:36.658 19:17:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:36.658 19:17:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:36.658 19:17:34 -- scripts/common.sh@340 -- # ver2_l=1 00:08:36.658 19:17:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:36.658 19:17:34 -- scripts/common.sh@343 -- # case "$op" in 00:08:36.658 19:17:34 -- scripts/common.sh@344 -- # : 1 00:08:36.658 19:17:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:36.658 19:17:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.658 19:17:34 -- scripts/common.sh@364 -- # decimal 1 00:08:36.658 19:17:34 -- scripts/common.sh@352 -- # local d=1 00:08:36.658 19:17:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.658 19:17:34 -- scripts/common.sh@354 -- # echo 1 00:08:36.658 19:17:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:36.658 19:17:34 -- scripts/common.sh@365 -- # decimal 2 00:08:36.659 19:17:34 -- scripts/common.sh@352 -- # local d=2 00:08:36.659 19:17:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.659 19:17:34 -- scripts/common.sh@354 -- # echo 2 00:08:36.659 19:17:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:36.659 19:17:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:36.659 19:17:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:36.659 19:17:34 -- scripts/common.sh@367 -- # return 0 00:08:36.659 19:17:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.659 19:17:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.659 --rc genhtml_branch_coverage=1 00:08:36.659 --rc genhtml_function_coverage=1 00:08:36.659 --rc genhtml_legend=1 00:08:36.659 --rc geninfo_all_blocks=1 00:08:36.659 --rc geninfo_unexecuted_blocks=1 00:08:36.659 00:08:36.659 ' 00:08:36.659 19:17:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.659 --rc genhtml_branch_coverage=1 00:08:36.659 --rc genhtml_function_coverage=1 00:08:36.659 --rc genhtml_legend=1 00:08:36.659 --rc geninfo_all_blocks=1 00:08:36.659 --rc geninfo_unexecuted_blocks=1 00:08:36.659 00:08:36.659 ' 00:08:36.659 19:17:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.659 --rc genhtml_branch_coverage=1 00:08:36.659 --rc genhtml_function_coverage=1 00:08:36.659 --rc genhtml_legend=1 00:08:36.659 --rc geninfo_all_blocks=1 00:08:36.659 --rc geninfo_unexecuted_blocks=1 00:08:36.659 00:08:36.659 ' 00:08:36.659 19:17:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:36.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.659 --rc genhtml_branch_coverage=1 00:08:36.659 --rc genhtml_function_coverage=1 00:08:36.659 --rc genhtml_legend=1 00:08:36.659 --rc geninfo_all_blocks=1 00:08:36.659 --rc geninfo_unexecuted_blocks=1 00:08:36.659 00:08:36.659 ' 00:08:36.659 19:17:34 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.659 19:17:34 -- nvmf/common.sh@7 -- # uname -s 00:08:36.659 19:17:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.659 19:17:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.659 19:17:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.659 19:17:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.659 19:17:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.659 19:17:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.659 19:17:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.659 19:17:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.659 19:17:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.659 19:17:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.659 19:17:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.659 19:17:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.659 19:17:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.659 19:17:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.659 19:17:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.659 19:17:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.659 19:17:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.659 19:17:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.659 19:17:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.659 19:17:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.659 19:17:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.659 19:17:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.659 19:17:34 -- paths/export.sh@5 -- # export PATH 00:08:36.659 19:17:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.659 19:17:34 -- nvmf/common.sh@46 -- # : 0 00:08:36.659 19:17:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:36.659 19:17:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:36.659 19:17:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:36.659 19:17:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.659 19:17:34 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.659 19:17:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:36.659 19:17:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:36.659 19:17:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:36.659 19:17:34 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.659 19:17:34 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.659 19:17:34 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:36.659 19:17:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:36.659 19:17:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.659 19:17:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:36.659 19:17:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:36.659 19:17:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:36.659 19:17:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.659 19:17:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.659 19:17:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.659 19:17:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:36.659 19:17:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:36.659 19:17:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:36.659 19:17:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.596 19:17:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.596 19:17:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:38.596 19:17:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:38.596 19:17:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:38.596 19:17:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:38.596 19:17:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:38.596 19:17:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:38.596 19:17:36 -- nvmf/common.sh@294 -- # net_devs=() 00:08:38.596 19:17:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:38.596 19:17:36 -- nvmf/common.sh@295 -- # e810=() 00:08:38.596 19:17:36 -- nvmf/common.sh@295 -- # local -ga e810 00:08:38.596 19:17:36 -- nvmf/common.sh@296 -- # x722=() 00:08:38.596 19:17:36 -- nvmf/common.sh@296 -- # local -ga x722 00:08:38.596 19:17:36 -- nvmf/common.sh@297 -- # mlx=() 00:08:38.596 19:17:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:38.596 19:17:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.596 19:17:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:38.596 19:17:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@326 -- # [[ 
e810 == mlx5 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:38.596 19:17:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:38.596 19:17:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.596 19:17:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.596 19:17:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.596 19:17:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.596 19:17:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:38.596 19:17:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.596 19:17:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.596 19:17:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.596 19:17:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.596 19:17:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.596 19:17:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.596 19:17:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.596 19:17:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.596 19:17:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.596 19:17:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.596 19:17:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.596 19:17:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.596 19:17:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:38.596 19:17:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:38.596 19:17:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:38.596 19:17:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:38.596 19:17:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.596 19:17:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.597 19:17:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.597 19:17:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:38.597 19:17:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.597 19:17:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.597 19:17:36 -- nvmf/common.sh@239 -- # 
NVMF_SECOND_TARGET_IP= 00:08:38.597 19:17:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.597 19:17:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.597 19:17:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:38.597 19:17:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:38.597 19:17:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.597 19:17:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.856 19:17:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.856 19:17:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.856 19:17:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:38.856 19:17:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.856 19:17:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.856 19:17:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.856 19:17:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:38.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:08:38.856 00:08:38.856 --- 10.0.0.2 ping statistics --- 00:08:38.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.856 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:38.856 19:17:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:38.856 00:08:38.856 --- 10.0.0.1 ping statistics --- 00:08:38.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.856 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:38.856 19:17:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.856 19:17:36 -- nvmf/common.sh@410 -- # return 0 00:08:38.856 19:17:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.856 19:17:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.856 19:17:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:38.856 19:17:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:38.856 19:17:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.856 19:17:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:38.856 19:17:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:38.856 19:17:36 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.856 19:17:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.856 19:17:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.856 19:17:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.856 19:17:36 -- nvmf/common.sh@469 -- # nvmfpid=1109431 00:08:38.856 19:17:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.856 19:17:36 -- nvmf/common.sh@470 -- # waitforlisten 1109431 00:08:38.856 19:17:36 -- common/autotest_common.sh@829 -- # '[' -z 1109431 ']' 00:08:38.856 19:17:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.856 19:17:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.856 19:17:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.856 19:17:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.856 19:17:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.856 [2024-11-17 19:17:37.037402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.856 [2024-11-17 19:17:37.037503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.856 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.856 [2024-11-17 19:17:37.106974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.116 [2024-11-17 19:17:37.200448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.116 [2024-11-17 19:17:37.200623] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.116 [2024-11-17 19:17:37.200643] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.116 [2024-11-17 19:17:37.200658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.116 [2024-11-17 19:17:37.200741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.116 [2024-11-17 19:17:37.200801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.116 [2024-11-17 19:17:37.200857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.116 [2024-11-17 19:17:37.200861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.053 19:17:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.053 19:17:37 -- common/autotest_common.sh@862 -- # return 0 00:08:40.053 19:17:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.053 19:17:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.053 19:17:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 19:17:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:40.053 19:17:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.053 19:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 [2024-11-17 19:17:38.021322] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.053 19:17:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:40.053 19:17:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.053 19:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 19:17:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.053 19:17:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.053 19:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 19:17:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.053 19:17:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.053 19:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 19:17:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.053 19:17:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.053 19:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 [2024-11-17 19:17:38.072952] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.053 19:17:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:40.053 19:17:38 -- target/connect_disconnect.sh@34 -- # set +x 00:08:42.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.919 [2024-11-17 19:18:26.917500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c4370 is same with the state(5) to be set 00:09:28.919 [2024-11-17 19:18:26.917606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c4370 is same with the state(5) to be set 00:09:28.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.860 
[2024-11-17 19:18:47.993484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c43e0 is same with the state(5) to be set 00:09:49.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.206 [2024-11-17 19:19:36.375415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:10:38.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.740 [2024-11-17 19:19:45.623477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:10:47.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.608 [2024-11-17 19:20:22.691399] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:11:24.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.130 [2024-11-17 19:20:32.023513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:11:34.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.581 [2024-11-17 19:20:43.649530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:11:45.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.080 [2024-11-17 19:20:59.948669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:12:02.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.956 [2024-11-17 19:21:09.074470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:12:10.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.029 [2024-11-17 19:21:27.798538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863be0 is same with the state(5) to be set 00:12:30.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.925 19:21:30 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:31.925 19:21:30 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:31.925 19:21:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:31.925 19:21:30 -- nvmf/common.sh@116 -- # sync 00:12:31.925 19:21:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:31.925 19:21:30 -- nvmf/common.sh@119 -- # set +e 00:12:31.925 
19:21:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:31.925 19:21:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:31.925 rmmod nvme_tcp 00:12:32.183 rmmod nvme_fabrics 00:12:32.183 rmmod nvme_keyring 00:12:32.183 19:21:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:32.183 19:21:30 -- nvmf/common.sh@123 -- # set -e 00:12:32.183 19:21:30 -- nvmf/common.sh@124 -- # return 0 00:12:32.183 19:21:30 -- nvmf/common.sh@477 -- # '[' -n 1109431 ']' 00:12:32.183 19:21:30 -- nvmf/common.sh@478 -- # killprocess 1109431 00:12:32.183 19:21:30 -- common/autotest_common.sh@936 -- # '[' -z 1109431 ']' 00:12:32.183 19:21:30 -- common/autotest_common.sh@940 -- # kill -0 1109431 00:12:32.183 19:21:30 -- common/autotest_common.sh@941 -- # uname 00:12:32.183 19:21:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:32.183 19:21:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1109431 00:12:32.183 19:21:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:32.183 19:21:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:32.183 19:21:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1109431' 00:12:32.183 killing process with pid 1109431 00:12:32.183 19:21:30 -- common/autotest_common.sh@955 -- # kill 1109431 00:12:32.183 19:21:30 -- common/autotest_common.sh@960 -- # wait 1109431 00:12:32.441 19:21:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:32.441 19:21:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:32.441 19:21:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:32.441 19:21:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.441 19:21:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:32.441 19:21:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.441 19:21:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.441 19:21:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.342 19:21:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:34.342 00:12:34.342 real 3m57.923s 00:12:34.342 user 15m6.294s 00:12:34.342 sys 0m36.051s 00:12:34.342 19:21:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:34.342 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:34.342 ************************************ 00:12:34.342 END TEST nvmf_connect_disconnect 00:12:34.342 ************************************ 00:12:34.342 19:21:32 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:34.342 19:21:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:34.342 19:21:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.342 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:34.342 ************************************ 00:12:34.342 START TEST nvmf_multitarget 00:12:34.342 ************************************ 00:12:34.342 19:21:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:34.602 * Looking for test storage... 
00:12:34.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.602 19:21:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:34.602 19:21:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:34.602 19:21:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:34.602 19:21:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:34.602 19:21:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:34.602 19:21:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:34.602 19:21:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:34.602 19:21:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:34.602 19:21:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:34.602 19:21:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.602 19:21:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:34.602 19:21:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:34.602 19:21:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:34.602 19:21:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:34.602 19:21:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:34.602 19:21:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:34.602 19:21:32 -- scripts/common.sh@344 -- # : 1 00:12:34.602 19:21:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:34.602 19:21:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.602 19:21:32 -- scripts/common.sh@364 -- # decimal 1 00:12:34.602 19:21:32 -- scripts/common.sh@352 -- # local d=1 00:12:34.602 19:21:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.602 19:21:32 -- scripts/common.sh@354 -- # echo 1 00:12:34.602 19:21:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:34.602 19:21:32 -- scripts/common.sh@365 -- # decimal 2 00:12:34.602 19:21:32 -- scripts/common.sh@352 -- # local d=2 00:12:34.602 19:21:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.602 19:21:32 -- scripts/common.sh@354 -- # echo 2 00:12:34.602 19:21:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:34.602 19:21:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:34.602 19:21:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:34.602 19:21:32 -- scripts/common.sh@367 -- # return 0 00:12:34.602 19:21:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.602 19:21:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.602 --rc genhtml_branch_coverage=1 00:12:34.602 --rc genhtml_function_coverage=1 00:12:34.602 --rc genhtml_legend=1 00:12:34.602 --rc geninfo_all_blocks=1 00:12:34.602 --rc geninfo_unexecuted_blocks=1 00:12:34.602 00:12:34.602 ' 00:12:34.602 19:21:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.602 --rc genhtml_branch_coverage=1 00:12:34.602 --rc genhtml_function_coverage=1 00:12:34.602 --rc genhtml_legend=1 00:12:34.602 --rc geninfo_all_blocks=1 00:12:34.602 --rc geninfo_unexecuted_blocks=1 00:12:34.602 00:12:34.602 ' 00:12:34.602 19:21:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.602 --rc genhtml_branch_coverage=1 00:12:34.602 --rc genhtml_function_coverage=1 00:12:34.602 --rc genhtml_legend=1 00:12:34.602 --rc geninfo_all_blocks=1 00:12:34.602 --rc geninfo_unexecuted_blocks=1 00:12:34.602 00:12:34.602 
' 00:12:34.602 19:21:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.602 --rc genhtml_branch_coverage=1 00:12:34.602 --rc genhtml_function_coverage=1 00:12:34.602 --rc genhtml_legend=1 00:12:34.602 --rc geninfo_all_blocks=1 00:12:34.602 --rc geninfo_unexecuted_blocks=1 00:12:34.602 00:12:34.602 ' 00:12:34.602 19:21:32 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.602 19:21:32 -- nvmf/common.sh@7 -- # uname -s 00:12:34.602 19:21:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.602 19:21:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.602 19:21:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.602 19:21:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.602 19:21:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.602 19:21:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.602 19:21:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.602 19:21:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.602 19:21:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.602 19:21:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.602 19:21:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.602 19:21:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.602 19:21:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.602 19:21:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.602 19:21:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.602 19:21:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.602 19:21:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.602 19:21:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.602 19:21:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.602 19:21:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.602 19:21:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.602 19:21:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.602 19:21:32 -- paths/export.sh@5 -- # export PATH 00:12:34.602 19:21:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.603 19:21:32 -- nvmf/common.sh@46 -- # : 0 00:12:34.603 19:21:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.603 19:21:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.603 19:21:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.603 19:21:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.603 19:21:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.603 19:21:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.603 19:21:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.603 19:21:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.603 19:21:32 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:34.603 19:21:32 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:34.603 19:21:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.603 19:21:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.603 19:21:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.603 19:21:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.603 19:21:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.603 19:21:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.603 19:21:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.603 19:21:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.603 19:21:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:34.603 19:21:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:34.603 19:21:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:34.603 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.506 19:21:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:36.506 19:21:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:36.506 19:21:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:36.506 19:21:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:36.506 19:21:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:36.506 19:21:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:36.506 19:21:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:36.506 19:21:34 -- nvmf/common.sh@294 -- # net_devs=() 00:12:36.506 19:21:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:36.506 19:21:34 -- 
nvmf/common.sh@295 -- # e810=() 00:12:36.506 19:21:34 -- nvmf/common.sh@295 -- # local -ga e810 00:12:36.506 19:21:34 -- nvmf/common.sh@296 -- # x722=() 00:12:36.506 19:21:34 -- nvmf/common.sh@296 -- # local -ga x722 00:12:36.506 19:21:34 -- nvmf/common.sh@297 -- # mlx=() 00:12:36.506 19:21:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:36.506 19:21:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.506 19:21:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:36.506 19:21:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:36.506 19:21:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:36.506 19:21:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.506 19:21:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.506 19:21:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.506 19:21:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.506 19:21:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:36.506 19:21:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.506 19:21:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.506 19:21:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.506 19:21:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.506 19:21:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:12:36.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.506 19:21:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.506 19:21:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.506 19:21:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.506 19:21:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.506 19:21:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.506 19:21:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.506 19:21:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.506 19:21:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:36.506 19:21:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:36.506 19:21:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:36.506 19:21:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:36.506 19:21:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.506 19:21:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.506 19:21:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.506 19:21:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:36.506 19:21:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.506 19:21:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.506 19:21:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:36.506 19:21:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.506 19:21:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.506 19:21:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:36.506 19:21:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:36.506 19:21:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.506 19:21:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.765 19:21:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.765 19:21:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.765 19:21:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:36.765 19:21:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.765 19:21:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.765 19:21:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.765 19:21:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:36.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:12:36.765 00:12:36.765 --- 10.0.0.2 ping statistics --- 00:12:36.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.765 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:12:36.765 19:21:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:12:36.765 00:12:36.765 --- 10.0.0.1 ping statistics --- 00:12:36.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.765 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:36.765 19:21:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.765 19:21:34 -- nvmf/common.sh@410 -- # return 0 00:12:36.765 19:21:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:36.765 19:21:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.765 19:21:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:36.765 19:21:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:36.765 19:21:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.765 19:21:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:36.765 19:21:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:36.765 19:21:34 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:36.765 19:21:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:36.765 19:21:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:36.766 19:21:34 -- common/autotest_common.sh@10 -- # set +x 00:12:36.766 19:21:34 -- nvmf/common.sh@469 -- # nvmfpid=1142206 00:12:36.766 19:21:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.766 19:21:34 -- nvmf/common.sh@470 -- # waitforlisten 1142206 00:12:36.766 19:21:34 -- common/autotest_common.sh@829 -- # '[' -z 1142206 ']' 00:12:36.766 19:21:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.766 19:21:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.766 19:21:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.766 19:21:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.766 19:21:34 -- common/autotest_common.sh@10 -- # set +x 00:12:36.766 [2024-11-17 19:21:34.924412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:36.766 [2024-11-17 19:21:34.924486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.766 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.766 [2024-11-17 19:21:34.991313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.024 [2024-11-17 19:21:35.083853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.024 [2024-11-17 19:21:35.084020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.024 [2024-11-17 19:21:35.084039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.024 [2024-11-17 19:21:35.084052] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:37.024 [2024-11-17 19:21:35.084114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.024 [2024-11-17 19:21:35.084168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.024 [2024-11-17 19:21:35.084215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.024 [2024-11-17 19:21:35.084218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.957 19:21:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.957 19:21:35 -- common/autotest_common.sh@862 -- # return 0 00:12:37.957 19:21:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.957 19:21:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.957 19:21:35 -- common/autotest_common.sh@10 -- # set +x 00:12:37.957 19:21:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.957 19:21:35 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.957 19:21:35 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.957 19:21:35 -- target/multitarget.sh@21 -- # jq length 00:12:37.957 19:21:36 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:37.957 19:21:36 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:37.957 "nvmf_tgt_1" 00:12:37.957 19:21:36 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:38.214 "nvmf_tgt_2" 00:12:38.214 19:21:36 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.214 19:21:36 -- target/multitarget.sh@28 -- # jq length 00:12:38.214 19:21:36 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:38.214 19:21:36 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:38.472 true 00:12:38.472 19:21:36 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:38.472 true 00:12:38.472 19:21:36 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.472 19:21:36 -- target/multitarget.sh@35 -- # jq length 00:12:38.472 19:21:36 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:38.472 19:21:36 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:38.472 19:21:36 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:38.472 19:21:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:38.472 19:21:36 -- nvmf/common.sh@116 -- # sync 00:12:38.472 19:21:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:38.472 19:21:36 -- nvmf/common.sh@119 -- # set +e 00:12:38.472 19:21:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:38.472 19:21:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:38.472 rmmod nvme_tcp 00:12:38.730 rmmod nvme_fabrics 00:12:38.730 rmmod nvme_keyring 00:12:38.730 19:21:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:38.730 19:21:36 -- nvmf/common.sh@123 -- # set -e 00:12:38.730 19:21:36 -- nvmf/common.sh@124 -- # return 0 
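For readers tracing the nvmf_multitarget run above, the whole test reduces to a short RPC sequence against the already-running nvmf_tgt. The following is a minimal sketch condensed from the multitarget.sh commands visible in this trace; it assumes the target application is already up and serving RPCs on the default /var/tmp/spdk.sock socket mentioned earlier in the log, and it drops the harness plumbing (traps, xtrace, netns setup), which is illustrative only:

#!/usr/bin/env bash
# Condensed from the multitarget.sh trace above: add two targets, check the
# count, then remove them again. Paths mirror the workspace used in this log.
set -euo pipefail
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

# Only the default target should exist before the test starts.
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 1 ]

# Create two named targets, passing the same -s 32 option the test uses.
"$RPC" nvmf_create_target -n nvmf_tgt_1 -s 32
"$RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 3 ]

# Delete them and confirm only the default target remains.
"$RPC" nvmf_delete_target -n nvmf_tgt_1
"$RPC" nvmf_delete_target -n nvmf_tgt_2
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 1 ]

The expected counts match the trace: one default target before the test, three after the two creates, and one again after both deletes.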
00:12:38.730 19:21:36 -- nvmf/common.sh@477 -- # '[' -n 1142206 ']' 00:12:38.730 19:21:36 -- nvmf/common.sh@478 -- # killprocess 1142206 00:12:38.730 19:21:36 -- common/autotest_common.sh@936 -- # '[' -z 1142206 ']' 00:12:38.730 19:21:36 -- common/autotest_common.sh@940 -- # kill -0 1142206 00:12:38.730 19:21:36 -- common/autotest_common.sh@941 -- # uname 00:12:38.730 19:21:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:38.730 19:21:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1142206 00:12:38.730 19:21:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:38.730 19:21:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:38.730 19:21:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1142206' 00:12:38.730 killing process with pid 1142206 00:12:38.730 19:21:36 -- common/autotest_common.sh@955 -- # kill 1142206 00:12:38.730 19:21:36 -- common/autotest_common.sh@960 -- # wait 1142206 00:12:38.989 19:21:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:38.989 19:21:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:38.989 19:21:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:38.989 19:21:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.989 19:21:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:38.989 19:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.989 19:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.989 19:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.894 19:21:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:40.894 00:12:40.894 real 0m6.475s 00:12:40.894 user 0m9.551s 00:12:40.894 sys 0m1.914s 00:12:40.894 19:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:40.894 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.894 ************************************ 00:12:40.894 END TEST nvmf_multitarget 00:12:40.894 ************************************ 00:12:40.894 19:21:39 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.894 19:21:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:40.894 19:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.894 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.894 ************************************ 00:12:40.894 START TEST nvmf_rpc 00:12:40.894 ************************************ 00:12:40.894 19:21:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.894 * Looking for test storage... 
00:12:40.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.894 19:21:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:40.894 19:21:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:40.894 19:21:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.153 19:21:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.153 19:21:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.153 19:21:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.153 19:21:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.153 19:21:39 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.153 19:21:39 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.153 19:21:39 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.153 19:21:39 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.153 19:21:39 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.153 19:21:39 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.153 19:21:39 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.153 19:21:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.153 19:21:39 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.153 19:21:39 -- scripts/common.sh@344 -- # : 1 00:12:41.153 19:21:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.153 19:21:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.153 19:21:39 -- scripts/common.sh@364 -- # decimal 1 00:12:41.153 19:21:39 -- scripts/common.sh@352 -- # local d=1 00:12:41.153 19:21:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.153 19:21:39 -- scripts/common.sh@354 -- # echo 1 00:12:41.153 19:21:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.153 19:21:39 -- scripts/common.sh@365 -- # decimal 2 00:12:41.153 19:21:39 -- scripts/common.sh@352 -- # local d=2 00:12:41.153 19:21:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.153 19:21:39 -- scripts/common.sh@354 -- # echo 2 00:12:41.153 19:21:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.153 19:21:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.153 19:21:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.153 19:21:39 -- scripts/common.sh@367 -- # return 0 00:12:41.153 19:21:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.153 19:21:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.153 --rc genhtml_branch_coverage=1 00:12:41.153 --rc genhtml_function_coverage=1 00:12:41.153 --rc genhtml_legend=1 00:12:41.153 --rc geninfo_all_blocks=1 00:12:41.153 --rc geninfo_unexecuted_blocks=1 00:12:41.153 00:12:41.153 ' 00:12:41.153 19:21:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.153 --rc genhtml_branch_coverage=1 00:12:41.153 --rc genhtml_function_coverage=1 00:12:41.153 --rc genhtml_legend=1 00:12:41.153 --rc geninfo_all_blocks=1 00:12:41.153 --rc geninfo_unexecuted_blocks=1 00:12:41.153 00:12:41.153 ' 00:12:41.153 19:21:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.153 --rc genhtml_branch_coverage=1 00:12:41.153 --rc genhtml_function_coverage=1 00:12:41.153 --rc genhtml_legend=1 00:12:41.153 --rc geninfo_all_blocks=1 00:12:41.153 --rc geninfo_unexecuted_blocks=1 00:12:41.153 00:12:41.153 
' 00:12:41.153 19:21:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.153 --rc genhtml_branch_coverage=1 00:12:41.153 --rc genhtml_function_coverage=1 00:12:41.153 --rc genhtml_legend=1 00:12:41.153 --rc geninfo_all_blocks=1 00:12:41.153 --rc geninfo_unexecuted_blocks=1 00:12:41.153 00:12:41.153 ' 00:12:41.153 19:21:39 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.153 19:21:39 -- nvmf/common.sh@7 -- # uname -s 00:12:41.153 19:21:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.153 19:21:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.153 19:21:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.153 19:21:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.153 19:21:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.153 19:21:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.153 19:21:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.153 19:21:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.153 19:21:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.153 19:21:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.153 19:21:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.153 19:21:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.153 19:21:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.153 19:21:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.153 19:21:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.153 19:21:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.153 19:21:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.153 19:21:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.153 19:21:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.153 19:21:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.153 19:21:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.153 19:21:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.153 19:21:39 -- paths/export.sh@5 -- # export PATH 00:12:41.153 19:21:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.153 19:21:39 -- nvmf/common.sh@46 -- # : 0 00:12:41.153 19:21:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.153 19:21:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.153 19:21:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.153 19:21:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.153 19:21:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.153 19:21:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:41.153 19:21:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.153 19:21:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.153 19:21:39 -- target/rpc.sh@11 -- # loops=5 00:12:41.153 19:21:39 -- target/rpc.sh@23 -- # nvmftestinit 00:12:41.153 19:21:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.153 19:21:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.153 19:21:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.153 19:21:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.153 19:21:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.153 19:21:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.154 19:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.154 19:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.154 19:21:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:41.154 19:21:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:41.154 19:21:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:41.154 19:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:43.684 19:21:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:43.684 19:21:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:43.684 19:21:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:43.684 19:21:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:43.684 19:21:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:43.684 19:21:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:43.684 19:21:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:43.684 19:21:41 -- nvmf/common.sh@294 -- # net_devs=() 00:12:43.684 19:21:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:43.684 19:21:41 -- nvmf/common.sh@295 -- # e810=() 00:12:43.684 19:21:41 -- nvmf/common.sh@295 -- # local -ga e810 00:12:43.684 
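For reference, the lcov version gate traced above ("lt 1.15 2" going through cmp_versions) reduces to a field-by-field numeric compare of dotted version strings. A minimal standalone sketch of that idea follows; it is not the scripts/common.sh implementation, and the function name is illustrative only.

#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly older than version $2.
version_lt() {
    local IFS=.-                       # split on dots and dashes, as the trace does
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0 (e.g. "1.15" vs "2")
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                           # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov older than 2: keep the extra --rc branch/function coverage options"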
19:21:41 -- nvmf/common.sh@296 -- # x722=() 00:12:43.684 19:21:41 -- nvmf/common.sh@296 -- # local -ga x722 00:12:43.684 19:21:41 -- nvmf/common.sh@297 -- # mlx=() 00:12:43.684 19:21:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:43.684 19:21:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.684 19:21:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:43.684 19:21:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:43.684 19:21:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:43.684 19:21:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:43.684 19:21:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:43.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:43.684 19:21:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:43.684 19:21:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:43.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:43.684 19:21:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:43.684 19:21:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:43.684 19:21:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:43.684 19:21:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.684 19:21:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:43.684 19:21:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.684 19:21:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:43.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:43.685 19:21:41 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:43.685 19:21:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:43.685 19:21:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.685 19:21:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:43.685 19:21:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.685 19:21:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:43.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:43.685 19:21:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.685 19:21:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:43.685 19:21:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:43.685 19:21:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:43.685 19:21:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:43.685 19:21:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:43.685 19:21:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.685 19:21:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.685 19:21:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.685 19:21:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:43.685 19:21:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.685 19:21:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.685 19:21:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:43.685 19:21:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.685 19:21:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.685 19:21:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:43.685 19:21:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:43.685 19:21:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.685 19:21:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.685 19:21:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.685 19:21:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.685 19:21:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:43.685 19:21:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.685 19:21:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.685 19:21:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.685 19:21:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:43.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:12:43.685 00:12:43.685 --- 10.0.0.2 ping statistics --- 00:12:43.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.685 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:12:43.685 19:21:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
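The device scan above matches PCI functions by vendor:device ID (the Intel 0x8086:0x159b E810 pair on this host) and then resolves each function to its kernel netdev. A hedged standalone sketch of the same lookup via sysfs follows; the autotest builds its own PCI cache, so this is only an approximation using standard sysfs paths, with the E810 IDs from this run hard-coded.

#!/usr/bin/env bash
# List kernel net devices backed by Intel E810 functions (vendor 0x8086, device 0x159b).
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")    # e.g. 0x8086
    device=$(cat "$pci/device")    # e.g. 0x159b
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    # A network-class function exposes its netdev name(s) under net/.
    for netdev in "$pci"/net/*; do
        [[ -e $netdev ]] || continue
        echo "Found net device under ${pci##*/}: ${netdev##*/}"
    done
done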
00:12:43.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:12:43.685 00:12:43.685 --- 10.0.0.1 ping statistics --- 00:12:43.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.685 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:43.685 19:21:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.685 19:21:41 -- nvmf/common.sh@410 -- # return 0 00:12:43.685 19:21:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:43.685 19:21:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.685 19:21:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:43.685 19:21:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:43.685 19:21:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.685 19:21:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:43.685 19:21:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:43.685 19:21:41 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:43.685 19:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:43.685 19:21:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.685 19:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.685 19:21:41 -- nvmf/common.sh@469 -- # nvmfpid=1144462 00:12:43.685 19:21:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.685 19:21:41 -- nvmf/common.sh@470 -- # waitforlisten 1144462 00:12:43.685 19:21:41 -- common/autotest_common.sh@829 -- # '[' -z 1144462 ']' 00:12:43.685 19:21:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.685 19:21:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.685 19:21:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.685 19:21:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.685 19:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.685 [2024-11-17 19:21:41.612773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:43.685 [2024-11-17 19:21:41.612864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.685 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.685 [2024-11-17 19:21:41.682144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.685 [2024-11-17 19:21:41.769211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:43.685 [2024-11-17 19:21:41.769363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.685 [2024-11-17 19:21:41.769396] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.685 [2024-11-17 19:21:41.769409] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
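In short, the nvmf_tcp_init sequence traced above moves one physical port (cvl_0_0) into a private namespace as the target side at 10.0.0.2, leaves the sibling port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420 in iptables, verifies reachability in both directions with ping, and only then starts nvmf_tgt inside the namespace. A condensed, hedged re-creation of those steps is below; interface names, addresses, and the nvmf_tgt path are the ones from this run and would need adjusting elsewhere.

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0            # becomes the NVMe-oF target side
INI_IF=cvl_0_1            # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-facing interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the SPDK target inside the namespace, with the same flags as this job.
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &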
00:12:43.685 [2024-11-17 19:21:41.769476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.685 [2024-11-17 19:21:41.769549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.685 [2024-11-17 19:21:41.769598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.685 [2024-11-17 19:21:41.769601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.618 19:21:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.618 19:21:42 -- common/autotest_common.sh@862 -- # return 0 00:12:44.618 19:21:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:44.618 19:21:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 19:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.618 19:21:42 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- target/rpc.sh@26 -- # stats='{ 00:12:44.618 "tick_rate": 2700000000, 00:12:44.618 "poll_groups": [ 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_0", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [] 00:12:44.618 }, 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_1", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [] 00:12:44.618 }, 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_2", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [] 00:12:44.618 }, 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_3", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [] 00:12:44.618 } 00:12:44.618 ] 00:12:44.618 }' 00:12:44.618 19:21:42 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:44.618 19:21:42 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:44.618 19:21:42 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:44.618 19:21:42 -- target/rpc.sh@15 -- # wc -l 00:12:44.618 19:21:42 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:44.618 19:21:42 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:44.618 19:21:42 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:44.618 19:21:42 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 [2024-11-17 19:21:42.710568] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- target/rpc.sh@33 -- # stats='{ 00:12:44.618 "tick_rate": 2700000000, 00:12:44.618 "poll_groups": [ 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_0", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [ 00:12:44.618 { 00:12:44.618 "trtype": "TCP" 00:12:44.618 } 00:12:44.618 ] 00:12:44.618 }, 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_1", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [ 00:12:44.618 { 00:12:44.618 "trtype": "TCP" 00:12:44.618 } 00:12:44.618 ] 00:12:44.618 }, 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_2", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [ 00:12:44.618 { 00:12:44.618 "trtype": "TCP" 00:12:44.618 } 00:12:44.618 ] 00:12:44.618 }, 00:12:44.618 { 00:12:44.618 "name": "nvmf_tgt_poll_group_3", 00:12:44.618 "admin_qpairs": 0, 00:12:44.618 "io_qpairs": 0, 00:12:44.618 "current_admin_qpairs": 0, 00:12:44.618 "current_io_qpairs": 0, 00:12:44.618 "pending_bdev_io": 0, 00:12:44.618 "completed_nvme_io": 0, 00:12:44.618 "transports": [ 00:12:44.618 { 00:12:44.618 "trtype": "TCP" 00:12:44.618 } 00:12:44.618 ] 00:12:44.618 } 00:12:44.618 ] 00:12:44.618 }' 00:12:44.618 19:21:42 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:44.618 19:21:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:44.618 19:21:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:44.618 19:21:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.618 19:21:42 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:44.618 19:21:42 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:44.618 19:21:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:44.618 19:21:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:44.618 19:21:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.618 19:21:42 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:44.618 19:21:42 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:44.618 19:21:42 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:44.618 19:21:42 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:44.618 19:21:42 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 Malloc1 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 
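The RPC sequence above (poll-group stats with empty transport lists, TCP transport creation, stats now showing a TCP transport per poll group, then a Malloc bdev and the test subsystem) maps directly onto scripts/rpc.py calls. A hedged sketch of the same sequence issued by hand follows, assuming it is run from the SPDK repository root against the default /var/tmp/spdk.sock socket and that jq is available.

#!/usr/bin/env bash
RPC=./scripts/rpc.py

# Before a transport exists, every poll group reports an empty transports list.
$RPC nvmf_get_stats | jq '.poll_groups[].transports'

# Same arguments as the rpc_cmd call traced above (-t tcp -o -u 8192).
$RPC nvmf_create_transport -t tcp -o -u 8192

# Each poll group should now carry a TCP transport entry; count groups and sum qpairs.
$RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l
$RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'

# Back the test subsystem with a 64 MiB, 512-byte-block malloc bdev.
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME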
19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.618 19:21:42 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.618 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.618 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.618 [2024-11-17 19:21:42.863188] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.618 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.619 19:21:42 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:44.619 19:21:42 -- common/autotest_common.sh@650 -- # local es=0 00:12:44.619 19:21:42 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:44.619 19:21:42 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:44.619 19:21:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.619 19:21:42 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:44.619 19:21:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.619 19:21:42 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:44.619 19:21:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.619 19:21:42 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:44.619 19:21:42 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:44.619 19:21:42 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:44.876 [2024-11-17 19:21:42.885769] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:44.876 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.876 could not add new controller: failed to write to nvme-fabrics device 00:12:44.876 19:21:42 -- common/autotest_common.sh@653 -- # es=1 00:12:44.876 19:21:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.876 19:21:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.876 19:21:42 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:44.876 19:21:42 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.876 19:21:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.876 19:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:44.876 19:21:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.876 19:21:42 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.441 19:21:43 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.441 19:21:43 -- common/autotest_common.sh@1187 -- # local i=0 00:12:45.441 19:21:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.441 19:21:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:45.441 19:21:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:47.964 19:21:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:47.964 19:21:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:47.964 19:21:45 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.964 19:21:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:47.964 19:21:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.964 19:21:45 -- common/autotest_common.sh@1197 -- # return 0 00:12:47.964 19:21:45 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.964 19:21:45 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.964 19:21:45 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.964 19:21:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.964 19:21:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.964 19:21:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.964 19:21:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.964 19:21:45 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.964 19:21:45 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.964 19:21:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.964 19:21:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.964 19:21:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.964 19:21:45 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.964 19:21:45 -- common/autotest_common.sh@650 -- # local es=0 00:12:47.964 19:21:45 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.964 19:21:45 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:47.964 19:21:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.964 19:21:45 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:47.964 19:21:45 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.964 19:21:45 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:47.964 19:21:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.964 19:21:45 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:47.964 19:21:45 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:47.964 19:21:45 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.964 [2024-11-17 19:21:45.725118] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:47.964 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:47.964 could not add new controller: failed to write to nvme-fabrics device 00:12:47.964 19:21:45 -- common/autotest_common.sh@653 -- # es=1 00:12:47.964 19:21:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.964 19:21:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.964 19:21:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.964 19:21:45 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:47.964 19:21:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.964 19:21:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.964 19:21:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.964 19:21:45 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.222 19:21:46 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.222 19:21:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:48.222 19:21:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.222 19:21:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:48.222 19:21:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:50.747 19:21:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:50.747 19:21:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:50.747 19:21:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.747 19:21:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:50.747 19:21:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.747 19:21:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:50.747 19:21:48 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.747 19:21:48 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.747 19:21:48 -- common/autotest_common.sh@1208 -- # local i=0 00:12:50.747 19:21:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:50.747 19:21:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.747 19:21:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:50.747 19:21:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.747 19:21:48 -- common/autotest_common.sh@1220 -- # return 0 00:12:50.747 19:21:48 -- 
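The two deliberately failing connects above exercise subsystem access control: with allow_any_host disabled, a host can attach only after nvmf_subsystem_add_host registers its NQN, and removing that host (or re-enabling allow_any_host) flips the behaviour back, which is exactly why the trace treats the "could not add new controller" errors as the expected result. A hedged sketch of that flow with rpc.py and nvme-cli follows, reusing the host NQN generated earlier in this run.

#!/usr/bin/env bash
RPC=./scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Locked down: only explicitly whitelisted hosts may connect.
$RPC nvmf_subsystem_allow_any_host -d "$SUBNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN" \
    && echo "unexpected: connect succeeded"    # expected to fail with an I/O error

# Whitelist this host, connect, then drop it again.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

# Or open the subsystem to any host.
$RPC nvmf_subsystem_allow_any_host -e "$SUBNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"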
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.747 19:21:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.747 19:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:50.747 19:21:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.747 19:21:48 -- target/rpc.sh@81 -- # seq 1 5 00:12:50.747 19:21:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.747 19:21:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.747 19:21:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.747 19:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:50.747 19:21:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.747 19:21:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.747 19:21:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.747 19:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:50.747 [2024-11-17 19:21:48.577671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.747 19:21:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.747 19:21:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.747 19:21:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.747 19:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:50.747 19:21:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.747 19:21:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.747 19:21:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.747 19:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:50.747 19:21:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.747 19:21:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.020 19:21:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.020 19:21:49 -- common/autotest_common.sh@1187 -- # local i=0 00:12:51.020 19:21:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.020 19:21:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:51.020 19:21:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:53.600 19:21:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:53.600 19:21:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:53.600 19:21:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.600 19:21:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:53.600 19:21:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.600 19:21:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:53.600 19:21:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.600 19:21:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.600 19:21:51 -- common/autotest_common.sh@1208 -- # local i=0 00:12:53.600 19:21:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:53.600 19:21:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
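The waitforserial / waitforserial_disconnect helpers seen above after every connect and disconnect simply poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears. A minimal hedged equivalent follows; the function names here are illustrative, not the autotest ones, and the retry limit mirrors the counter visible in the trace.

#!/usr/bin/env bash
SERIAL=SPDKISFASTANDAWESOME

wait_for_serial() {            # wait until at least one namespace shows up
    local i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL") >= 1 )) && return 0
        sleep 2
    done
    return 1
}

wait_for_serial_gone() {       # wait until no namespace carries the serial
    local i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL" || return 0
        sleep 2
    done
    return 1
}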
00:12:53.600 19:21:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:53.600 19:21:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.600 19:21:51 -- common/autotest_common.sh@1220 -- # return 0 00:12:53.600 19:21:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.600 19:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.600 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 19:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.600 19:21:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.600 19:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.600 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 19:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.600 19:21:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.600 19:21:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.600 19:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.600 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 19:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.600 19:21:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.600 19:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.600 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 [2024-11-17 19:21:51.403921] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.600 19:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.600 19:21:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.600 19:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.600 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 19:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.600 19:21:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.600 19:21:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.600 19:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.600 19:21:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.600 19:21:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.858 19:21:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.858 19:21:52 -- common/autotest_common.sh@1187 -- # local i=0 00:12:53.858 19:21:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.858 19:21:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:53.858 19:21:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:56.387 19:21:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:56.387 19:21:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:56.387 19:21:54 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.387 19:21:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:56.387 19:21:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.387 19:21:54 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:56.387 19:21:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.387 19:21:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.387 19:21:54 -- common/autotest_common.sh@1208 -- # local i=0 00:12:56.387 19:21:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:56.387 19:21:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.387 19:21:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:56.387 19:21:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.387 19:21:54 -- common/autotest_common.sh@1220 -- # return 0 00:12:56.387 19:21:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.387 19:21:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 19:21:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 19:21:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 19:21:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.387 19:21:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 19:21:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 19:21:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 19:21:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.387 19:21:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.387 19:21:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 19:21:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 19:21:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 19:21:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.387 19:21:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 19:21:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 [2024-11-17 19:21:54.150314] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.387 19:21:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 19:21:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.387 19:21:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 19:21:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 19:21:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 19:21:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.387 19:21:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.387 19:21:54 -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 19:21:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.387 19:21:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.645 19:21:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.646 19:21:54 -- common/autotest_common.sh@1187 -- # local i=0 00:12:56.646 19:21:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.646 19:21:54 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:56.646 19:21:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:59.172 19:21:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:59.172 19:21:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:59.172 19:21:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.172 19:21:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:59.172 19:21:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.172 19:21:56 -- common/autotest_common.sh@1197 -- # return 0 00:12:59.172 19:21:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.172 19:21:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.172 19:21:56 -- common/autotest_common.sh@1208 -- # local i=0 00:12:59.172 19:21:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:59.172 19:21:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.172 19:21:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:59.172 19:21:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.172 19:21:56 -- common/autotest_common.sh@1220 -- # return 0 00:12:59.172 19:21:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.172 19:21:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.172 19:21:56 -- common/autotest_common.sh@10 -- # set +x 00:12:59.172 19:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.172 19:21:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.172 19:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.172 19:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.172 19:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.172 19:21:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.172 19:21:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.172 19:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.172 19:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.172 19:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.172 19:21:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.172 19:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.172 19:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.172 [2024-11-17 19:21:57.028544] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.172 19:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.173 19:21:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.173 19:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.173 19:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.173 19:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.173 19:21:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.173 19:21:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.173 19:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.173 19:21:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.173 
19:21:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.738 19:21:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.738 19:21:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:59.738 19:21:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.738 19:21:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:59.738 19:21:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:01.646 19:21:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:01.646 19:21:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:01.646 19:21:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.646 19:21:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:01.646 19:21:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.646 19:21:59 -- common/autotest_common.sh@1197 -- # return 0 00:13:01.646 19:21:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.646 19:21:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.646 19:21:59 -- common/autotest_common.sh@1208 -- # local i=0 00:13:01.646 19:21:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:01.646 19:21:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.646 19:21:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:01.646 19:21:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.646 19:21:59 -- common/autotest_common.sh@1220 -- # return 0 00:13:01.646 19:21:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.646 19:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.646 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:01.646 19:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.646 19:21:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.646 19:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.646 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:01.646 19:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.646 19:21:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.646 19:21:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.646 19:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.646 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:01.646 19:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.646 19:21:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.646 19:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.646 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:01.646 [2024-11-17 19:21:59.881180] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.646 19:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.646 19:21:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.646 
19:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.646 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:01.646 19:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.646 19:21:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.646 19:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.646 19:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:01.646 19:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.646 19:21:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.580 19:22:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.580 19:22:00 -- common/autotest_common.sh@1187 -- # local i=0 00:13:02.580 19:22:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.580 19:22:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:02.580 19:22:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:04.476 19:22:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:04.476 19:22:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:04.476 19:22:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.476 19:22:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:04.476 19:22:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.476 19:22:02 -- common/autotest_common.sh@1197 -- # return 0 00:13:04.476 19:22:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.476 19:22:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.476 19:22:02 -- common/autotest_common.sh@1208 -- # local i=0 00:13:04.476 19:22:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:04.476 19:22:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.476 19:22:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:04.476 19:22:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.476 19:22:02 -- common/autotest_common.sh@1220 -- # return 0 00:13:04.476 19:22:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.476 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.476 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.476 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.476 19:22:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.476 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.476 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.734 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.734 19:22:02 -- target/rpc.sh@99 -- # seq 1 5 00:13:04.734 19:22:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.734 19:22:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.734 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.734 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.734 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.734 19:22:02 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.734 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.734 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.734 [2024-11-17 19:22:02.761441] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.734 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.734 19:22:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.734 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.734 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.734 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.734 19:22:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.734 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.734 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.734 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.734 19:22:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.734 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.734 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.734 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.734 19:22:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.734 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.735 19:22:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 [2024-11-17 19:22:02.809512] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.735 19:22:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 [2024-11-17 19:22:02.857718] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.735 19:22:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 [2024-11-17 19:22:02.905882] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 
19:22:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.735 19:22:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 [2024-11-17 19:22:02.954083] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 19:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.735 19:22:02 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
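The xtrace above (target/rpc.sh@99-107) is the same create/configure/teardown pass repeated once per loop iteration against a single subsystem and TCP listener, and it ends by asking the target for its poll-group statistics. A minimal standalone sketch of that loop, assuming scripts/rpc.py is on PATH, a Malloc1 bdev already exists, and an illustrative loop count (the traced script defines its own $loops):

    rpc=./scripts/rpc.py
    loops=5   # illustrative value only; rpc.sh sets $loops itself
    for i in $(seq 1 "$loops"); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
    $rpc nvmf_get_stats   # per-poll-group qpair counters, dumped below

The nvmf_get_stats output and the jsum aggregation checks follow below.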
00:13:04.735 19:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.735 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.992 19:22:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.992 19:22:03 -- target/rpc.sh@110 -- # stats='{ 00:13:04.992 "tick_rate": 2700000000, 00:13:04.992 "poll_groups": [ 00:13:04.992 { 00:13:04.992 "name": "nvmf_tgt_poll_group_0", 00:13:04.992 "admin_qpairs": 2, 00:13:04.992 "io_qpairs": 84, 00:13:04.992 "current_admin_qpairs": 0, 00:13:04.992 "current_io_qpairs": 0, 00:13:04.992 "pending_bdev_io": 0, 00:13:04.992 "completed_nvme_io": 170, 00:13:04.992 "transports": [ 00:13:04.992 { 00:13:04.992 "trtype": "TCP" 00:13:04.992 } 00:13:04.992 ] 00:13:04.992 }, 00:13:04.992 { 00:13:04.992 "name": "nvmf_tgt_poll_group_1", 00:13:04.992 "admin_qpairs": 2, 00:13:04.992 "io_qpairs": 84, 00:13:04.992 "current_admin_qpairs": 0, 00:13:04.992 "current_io_qpairs": 0, 00:13:04.992 "pending_bdev_io": 0, 00:13:04.992 "completed_nvme_io": 197, 00:13:04.992 "transports": [ 00:13:04.992 { 00:13:04.992 "trtype": "TCP" 00:13:04.992 } 00:13:04.992 ] 00:13:04.992 }, 00:13:04.992 { 00:13:04.992 "name": "nvmf_tgt_poll_group_2", 00:13:04.992 "admin_qpairs": 1, 00:13:04.992 "io_qpairs": 84, 00:13:04.992 "current_admin_qpairs": 0, 00:13:04.992 "current_io_qpairs": 0, 00:13:04.992 "pending_bdev_io": 0, 00:13:04.992 "completed_nvme_io": 170, 00:13:04.992 "transports": [ 00:13:04.992 { 00:13:04.992 "trtype": "TCP" 00:13:04.992 } 00:13:04.992 ] 00:13:04.992 }, 00:13:04.992 { 00:13:04.992 "name": "nvmf_tgt_poll_group_3", 00:13:04.992 "admin_qpairs": 2, 00:13:04.992 "io_qpairs": 84, 00:13:04.992 "current_admin_qpairs": 0, 00:13:04.992 "current_io_qpairs": 0, 00:13:04.992 "pending_bdev_io": 0, 00:13:04.992 "completed_nvme_io": 149, 00:13:04.992 "transports": [ 00:13:04.992 { 00:13:04.992 "trtype": "TCP" 00:13:04.992 } 00:13:04.992 ] 00:13:04.992 } 00:13:04.992 ] 00:13:04.992 }' 00:13:04.992 19:22:03 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.992 19:22:03 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.992 19:22:03 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.992 19:22:03 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.992 19:22:03 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:04.992 19:22:03 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:04.992 19:22:03 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.992 19:22:03 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.992 19:22:03 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.992 19:22:03 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:04.992 19:22:03 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:04.992 19:22:03 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:04.992 19:22:03 -- target/rpc.sh@123 -- # nvmftestfini 00:13:04.992 19:22:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:04.992 19:22:03 -- nvmf/common.sh@116 -- # sync 00:13:04.992 19:22:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:04.992 19:22:03 -- nvmf/common.sh@119 -- # set +e 00:13:04.992 19:22:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:04.992 19:22:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:04.992 rmmod nvme_tcp 00:13:04.992 rmmod nvme_fabrics 00:13:04.992 rmmod nvme_keyring 00:13:04.992 19:22:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:04.992 19:22:03 -- nvmf/common.sh@123 -- # set -e 00:13:04.992 19:22:03 -- 
nvmf/common.sh@124 -- # return 0 00:13:04.992 19:22:03 -- nvmf/common.sh@477 -- # '[' -n 1144462 ']' 00:13:04.992 19:22:03 -- nvmf/common.sh@478 -- # killprocess 1144462 00:13:04.992 19:22:03 -- common/autotest_common.sh@936 -- # '[' -z 1144462 ']' 00:13:04.992 19:22:03 -- common/autotest_common.sh@940 -- # kill -0 1144462 00:13:04.992 19:22:03 -- common/autotest_common.sh@941 -- # uname 00:13:04.992 19:22:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.992 19:22:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1144462 00:13:04.992 19:22:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:04.993 19:22:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:04.993 19:22:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1144462' 00:13:04.993 killing process with pid 1144462 00:13:04.993 19:22:03 -- common/autotest_common.sh@955 -- # kill 1144462 00:13:04.993 19:22:03 -- common/autotest_common.sh@960 -- # wait 1144462 00:13:05.253 19:22:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:05.253 19:22:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:05.253 19:22:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:05.253 19:22:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.253 19:22:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:05.253 19:22:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.253 19:22:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.253 19:22:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.780 19:22:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:07.780 00:13:07.780 real 0m26.372s 00:13:07.780 user 1m26.007s 00:13:07.780 sys 0m4.448s 00:13:07.780 19:22:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:07.780 19:22:05 -- common/autotest_common.sh@10 -- # set +x 00:13:07.780 ************************************ 00:13:07.780 END TEST nvmf_rpc 00:13:07.780 ************************************ 00:13:07.780 19:22:05 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.780 19:22:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:07.780 19:22:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.780 19:22:05 -- common/autotest_common.sh@10 -- # set +x 00:13:07.780 ************************************ 00:13:07.780 START TEST nvmf_invalid 00:13:07.780 ************************************ 00:13:07.780 19:22:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.780 * Looking for test storage... 
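The jsum checks in the nvmf_get_stats section above total one numeric field across all four poll groups by piping a jq filter into awk: 2+2+1+2 admin queue pairs gives the 7, and 4x84 I/O queue pairs gives the 336 that satisfy the (( ... > 0 )) assertions. A sketch of that helper, assuming $stats holds the JSON captured above:

    jsum() {
        local filter=$1
        # print one number per poll group, then sum them
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4*84 = 336

The lines that follow are invalid.sh locating its test storage and sourcing nvmf/common.sh before the malformed-input cases start.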
00:13:07.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.781 19:22:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:07.781 19:22:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:07.781 19:22:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:07.781 19:22:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:07.781 19:22:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:07.781 19:22:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:07.781 19:22:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:07.781 19:22:05 -- scripts/common.sh@335 -- # IFS=.-: 00:13:07.781 19:22:05 -- scripts/common.sh@335 -- # read -ra ver1 00:13:07.781 19:22:05 -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.781 19:22:05 -- scripts/common.sh@336 -- # read -ra ver2 00:13:07.781 19:22:05 -- scripts/common.sh@337 -- # local 'op=<' 00:13:07.781 19:22:05 -- scripts/common.sh@339 -- # ver1_l=2 00:13:07.781 19:22:05 -- scripts/common.sh@340 -- # ver2_l=1 00:13:07.781 19:22:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:07.781 19:22:05 -- scripts/common.sh@343 -- # case "$op" in 00:13:07.781 19:22:05 -- scripts/common.sh@344 -- # : 1 00:13:07.781 19:22:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:07.781 19:22:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.781 19:22:05 -- scripts/common.sh@364 -- # decimal 1 00:13:07.781 19:22:05 -- scripts/common.sh@352 -- # local d=1 00:13:07.781 19:22:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.781 19:22:05 -- scripts/common.sh@354 -- # echo 1 00:13:07.781 19:22:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:07.781 19:22:05 -- scripts/common.sh@365 -- # decimal 2 00:13:07.781 19:22:05 -- scripts/common.sh@352 -- # local d=2 00:13:07.781 19:22:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.781 19:22:05 -- scripts/common.sh@354 -- # echo 2 00:13:07.781 19:22:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:07.781 19:22:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:07.781 19:22:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:07.781 19:22:05 -- scripts/common.sh@367 -- # return 0 00:13:07.781 19:22:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.781 19:22:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:07.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.781 --rc genhtml_branch_coverage=1 00:13:07.781 --rc genhtml_function_coverage=1 00:13:07.781 --rc genhtml_legend=1 00:13:07.781 --rc geninfo_all_blocks=1 00:13:07.781 --rc geninfo_unexecuted_blocks=1 00:13:07.781 00:13:07.781 ' 00:13:07.781 19:22:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:07.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.781 --rc genhtml_branch_coverage=1 00:13:07.781 --rc genhtml_function_coverage=1 00:13:07.781 --rc genhtml_legend=1 00:13:07.781 --rc geninfo_all_blocks=1 00:13:07.781 --rc geninfo_unexecuted_blocks=1 00:13:07.781 00:13:07.781 ' 00:13:07.781 19:22:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:07.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.781 --rc genhtml_branch_coverage=1 00:13:07.781 --rc genhtml_function_coverage=1 00:13:07.781 --rc genhtml_legend=1 00:13:07.781 --rc geninfo_all_blocks=1 00:13:07.781 --rc geninfo_unexecuted_blocks=1 00:13:07.781 00:13:07.781 
' 00:13:07.781 19:22:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:07.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.781 --rc genhtml_branch_coverage=1 00:13:07.781 --rc genhtml_function_coverage=1 00:13:07.781 --rc genhtml_legend=1 00:13:07.781 --rc geninfo_all_blocks=1 00:13:07.781 --rc geninfo_unexecuted_blocks=1 00:13:07.781 00:13:07.781 ' 00:13:07.781 19:22:05 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.781 19:22:05 -- nvmf/common.sh@7 -- # uname -s 00:13:07.781 19:22:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.781 19:22:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.781 19:22:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.781 19:22:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.781 19:22:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.781 19:22:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.781 19:22:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.781 19:22:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.781 19:22:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.781 19:22:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.781 19:22:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.781 19:22:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.781 19:22:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.781 19:22:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.781 19:22:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.781 19:22:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.781 19:22:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.781 19:22:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.781 19:22:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.781 19:22:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.781 19:22:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.781 19:22:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.781 19:22:05 -- paths/export.sh@5 -- # export PATH 00:13:07.781 19:22:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.781 19:22:05 -- nvmf/common.sh@46 -- # : 0 00:13:07.781 19:22:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:07.781 19:22:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:07.781 19:22:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:07.781 19:22:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.781 19:22:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.781 19:22:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:07.781 19:22:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:07.781 19:22:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:07.781 19:22:05 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.781 19:22:05 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.781 19:22:05 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:07.781 19:22:05 -- target/invalid.sh@14 -- # target=foobar 00:13:07.781 19:22:05 -- target/invalid.sh@16 -- # RANDOM=0 00:13:07.781 19:22:05 -- target/invalid.sh@34 -- # nvmftestinit 00:13:07.781 19:22:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:07.781 19:22:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.781 19:22:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:07.781 19:22:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:07.781 19:22:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:07.781 19:22:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.781 19:22:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.781 19:22:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.781 19:22:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:07.781 19:22:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:07.781 19:22:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:07.781 19:22:05 -- common/autotest_common.sh@10 -- # set +x 00:13:09.683 19:22:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:09.683 19:22:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:09.683 19:22:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:09.683 19:22:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:09.683 19:22:07 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:09.683 19:22:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:09.683 19:22:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:09.683 19:22:07 -- nvmf/common.sh@294 -- # net_devs=() 00:13:09.683 19:22:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:09.683 19:22:07 -- nvmf/common.sh@295 -- # e810=() 00:13:09.683 19:22:07 -- nvmf/common.sh@295 -- # local -ga e810 00:13:09.683 19:22:07 -- nvmf/common.sh@296 -- # x722=() 00:13:09.683 19:22:07 -- nvmf/common.sh@296 -- # local -ga x722 00:13:09.683 19:22:07 -- nvmf/common.sh@297 -- # mlx=() 00:13:09.683 19:22:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:09.683 19:22:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.683 19:22:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:09.683 19:22:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:09.683 19:22:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:09.683 19:22:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.683 19:22:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:09.683 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:09.683 19:22:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.683 19:22:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:09.683 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:09.683 19:22:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:09.683 19:22:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.683 
19:22:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.683 19:22:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.683 19:22:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.683 19:22:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:09.683 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:09.683 19:22:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.683 19:22:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.683 19:22:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.683 19:22:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.683 19:22:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.683 19:22:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:09.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:09.683 19:22:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.683 19:22:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:09.683 19:22:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:09.683 19:22:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:09.683 19:22:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.683 19:22:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.683 19:22:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.683 19:22:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:09.683 19:22:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.683 19:22:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.683 19:22:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:09.683 19:22:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.683 19:22:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.683 19:22:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:09.683 19:22:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:09.683 19:22:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.683 19:22:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.683 19:22:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.683 19:22:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.683 19:22:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:09.683 19:22:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.683 19:22:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.683 19:22:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.683 19:22:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:09.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:13:09.683 00:13:09.683 --- 10.0.0.2 ping statistics --- 00:13:09.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.683 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:09.683 19:22:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:13:09.683 00:13:09.683 --- 10.0.0.1 ping statistics --- 00:13:09.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.683 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:13:09.683 19:22:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.683 19:22:07 -- nvmf/common.sh@410 -- # return 0 00:13:09.683 19:22:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:09.683 19:22:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.683 19:22:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:09.683 19:22:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.683 19:22:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:09.683 19:22:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:09.683 19:22:07 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:09.683 19:22:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.683 19:22:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.683 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:13:09.683 19:22:07 -- nvmf/common.sh@469 -- # nvmfpid=1149176 00:13:09.683 19:22:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.683 19:22:07 -- nvmf/common.sh@470 -- # waitforlisten 1149176 00:13:09.683 19:22:07 -- common/autotest_common.sh@829 -- # '[' -z 1149176 ']' 00:13:09.683 19:22:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.684 19:22:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.684 19:22:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.684 19:22:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.684 19:22:07 -- common/autotest_common.sh@10 -- # set +x 00:13:09.941 [2024-11-17 19:22:07.969543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:09.941 [2024-11-17 19:22:07.969629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.941 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.941 [2024-11-17 19:22:08.045232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.941 [2024-11-17 19:22:08.141043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:09.941 [2024-11-17 19:22:08.141216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.941 [2024-11-17 19:22:08.141234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.941 [2024-11-17 19:22:08.141247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
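Before the invalid-input cases run, nvmf_tcp_init and nvmfappstart (traced above) isolate the target-side port cvl_0_0 in a network namespace with the target address 10.0.0.2, keep the initiator-side port cvl_0_1 at 10.0.0.1 in the root namespace, accept TCP/4420 from the initiator interface, verify reachability both ways, and then launch nvmf_tgt inside that namespace and wait for its RPC socket. A condensed sketch of those steps, reusing the interface and address names from the trace; the polling loop at the end is only a stand-in for the waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address

    # -m 0xF: four-core mask, -e 0xFFFF: tracepoint group mask, as in the trace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # stand-in for waitforlisten: poll until the JSON-RPC socket at /var/tmp/spdk.sock answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done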
00:13:09.941 [2024-11-17 19:22:08.141302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.941 [2024-11-17 19:22:08.141331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.941 [2024-11-17 19:22:08.141384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.941 [2024-11-17 19:22:08.141386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.874 19:22:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.874 19:22:08 -- common/autotest_common.sh@862 -- # return 0 00:13:10.874 19:22:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:10.874 19:22:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.874 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:13:10.874 19:22:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.874 19:22:08 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.874 19:22:08 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31399 00:13:11.131 [2024-11-17 19:22:09.231138] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:11.131 19:22:09 -- target/invalid.sh@40 -- # out='request: 00:13:11.131 { 00:13:11.131 "nqn": "nqn.2016-06.io.spdk:cnode31399", 00:13:11.131 "tgt_name": "foobar", 00:13:11.131 "method": "nvmf_create_subsystem", 00:13:11.131 "req_id": 1 00:13:11.131 } 00:13:11.131 Got JSON-RPC error response 00:13:11.131 response: 00:13:11.131 { 00:13:11.131 "code": -32603, 00:13:11.131 "message": "Unable to find target foobar" 00:13:11.131 }' 00:13:11.131 19:22:09 -- target/invalid.sh@41 -- # [[ request: 00:13:11.131 { 00:13:11.131 "nqn": "nqn.2016-06.io.spdk:cnode31399", 00:13:11.131 "tgt_name": "foobar", 00:13:11.131 "method": "nvmf_create_subsystem", 00:13:11.131 "req_id": 1 00:13:11.131 } 00:13:11.131 Got JSON-RPC error response 00:13:11.131 response: 00:13:11.131 { 00:13:11.131 "code": -32603, 00:13:11.131 "message": "Unable to find target foobar" 00:13:11.131 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:11.131 19:22:09 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:11.131 19:22:09 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17026 00:13:11.388 [2024-11-17 19:22:09.484000] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17026: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:11.388 19:22:09 -- target/invalid.sh@45 -- # out='request: 00:13:11.388 { 00:13:11.388 "nqn": "nqn.2016-06.io.spdk:cnode17026", 00:13:11.388 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:11.388 "method": "nvmf_create_subsystem", 00:13:11.388 "req_id": 1 00:13:11.388 } 00:13:11.388 Got JSON-RPC error response 00:13:11.388 response: 00:13:11.388 { 00:13:11.388 "code": -32602, 00:13:11.388 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:11.388 }' 00:13:11.388 19:22:09 -- target/invalid.sh@46 -- # [[ request: 00:13:11.388 { 00:13:11.388 "nqn": "nqn.2016-06.io.spdk:cnode17026", 00:13:11.388 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:11.388 "method": "nvmf_create_subsystem", 00:13:11.388 "req_id": 1 00:13:11.388 } 00:13:11.388 Got JSON-RPC error response 00:13:11.388 response: 00:13:11.388 { 
00:13:11.388 "code": -32602, 00:13:11.388 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:11.388 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.388 19:22:09 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:11.388 19:22:09 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15582 00:13:11.647 [2024-11-17 19:22:09.740820] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15582: invalid model number 'SPDK_Controller' 00:13:11.647 19:22:09 -- target/invalid.sh@50 -- # out='request: 00:13:11.647 { 00:13:11.647 "nqn": "nqn.2016-06.io.spdk:cnode15582", 00:13:11.647 "model_number": "SPDK_Controller\u001f", 00:13:11.647 "method": "nvmf_create_subsystem", 00:13:11.647 "req_id": 1 00:13:11.647 } 00:13:11.647 Got JSON-RPC error response 00:13:11.647 response: 00:13:11.647 { 00:13:11.647 "code": -32602, 00:13:11.647 "message": "Invalid MN SPDK_Controller\u001f" 00:13:11.647 }' 00:13:11.647 19:22:09 -- target/invalid.sh@51 -- # [[ request: 00:13:11.647 { 00:13:11.647 "nqn": "nqn.2016-06.io.spdk:cnode15582", 00:13:11.647 "model_number": "SPDK_Controller\u001f", 00:13:11.647 "method": "nvmf_create_subsystem", 00:13:11.647 "req_id": 1 00:13:11.647 } 00:13:11.647 Got JSON-RPC error response 00:13:11.647 response: 00:13:11.647 { 00:13:11.647 "code": -32602, 00:13:11.647 "message": "Invalid MN SPDK_Controller\u001f" 00:13:11.647 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:11.647 19:22:09 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:11.647 19:22:09 -- target/invalid.sh@19 -- # local length=21 ll 00:13:11.647 19:22:09 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.647 19:22:09 -- target/invalid.sh@21 -- # local chars 00:13:11.647 19:22:09 -- target/invalid.sh@22 -- # local string 00:13:11.647 19:22:09 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.647 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # printf %x 107 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # string+=k 00:13:11.647 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.647 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # printf %x 103 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # string+=g 00:13:11.647 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.647 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # printf %x 50 00:13:11.647 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=2 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 89 00:13:11.648 19:22:09 -- 
target/invalid.sh@25 -- # echo -e '\x59' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=Y 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 74 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=J 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 93 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=']' 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 94 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+='^' 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 122 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=z 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 46 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=. 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 91 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+='[' 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 97 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=a 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 107 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=k 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 34 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+='"' 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 118 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=v 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 44 00:13:11.648 19:22:09 -- 
target/invalid.sh@25 -- # echo -e '\x2c' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=, 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 122 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=z 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 97 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=a 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 58 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=: 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 49 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=1 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 108 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=l 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # printf %x 59 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:11.648 19:22:09 -- target/invalid.sh@25 -- # string+=';' 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.648 19:22:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.648 19:22:09 -- target/invalid.sh@28 -- # [[ k == \- ]] 00:13:11.648 19:22:09 -- target/invalid.sh@31 -- # echo 'kg2YJ]^z.[ak"v,za:1l;' 00:13:11.648 19:22:09 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'kg2YJ]^z.[ak"v,za:1l;' nqn.2016-06.io.spdk:cnode19709 00:13:11.907 [2024-11-17 19:22:10.082007] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19709: invalid serial number 'kg2YJ]^z.[ak"v,za:1l;' 00:13:11.907 19:22:10 -- target/invalid.sh@54 -- # out='request: 00:13:11.907 { 00:13:11.907 "nqn": "nqn.2016-06.io.spdk:cnode19709", 00:13:11.907 "serial_number": "kg2YJ]^z.[ak\"v,za:1l;", 00:13:11.907 "method": "nvmf_create_subsystem", 00:13:11.907 "req_id": 1 00:13:11.907 } 00:13:11.907 Got JSON-RPC error response 00:13:11.907 response: 00:13:11.907 { 00:13:11.907 "code": -32602, 00:13:11.907 "message": "Invalid SN kg2YJ]^z.[ak\"v,za:1l;" 00:13:11.907 }' 00:13:11.907 19:22:10 -- target/invalid.sh@55 -- # [[ request: 00:13:11.907 { 00:13:11.907 "nqn": "nqn.2016-06.io.spdk:cnode19709", 00:13:11.907 "serial_number": "kg2YJ]^z.[ak\"v,za:1l;", 00:13:11.907 "method": "nvmf_create_subsystem", 00:13:11.907 "req_id": 1 00:13:11.907 } 00:13:11.907 Got JSON-RPC error response 00:13:11.907 response: 00:13:11.907 { 00:13:11.907 "code": -32602, 00:13:11.907 
"message": "Invalid SN kg2YJ]^z.[ak\"v,za:1l;" 00:13:11.907 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.907 19:22:10 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:11.907 19:22:10 -- target/invalid.sh@19 -- # local length=41 ll 00:13:11.907 19:22:10 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.907 19:22:10 -- target/invalid.sh@21 -- # local chars 00:13:11.907 19:22:10 -- target/invalid.sh@22 -- # local string 00:13:11.907 19:22:10 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.907 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.907 19:22:10 -- target/invalid.sh@25 -- # printf %x 111 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=o 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 87 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=W 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 116 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=t 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 46 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=. 
00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 76 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=L 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 111 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=o 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 120 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=x 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 124 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+='|' 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 72 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=H 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 82 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=R 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 124 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+='|' 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 91 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+='[' 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 126 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+='~' 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 39 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=\' 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 121 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # 
string+=y 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 109 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=m 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 75 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=K 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 102 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=f 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 105 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=i 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 96 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+='`' 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 102 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=f 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 98 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=b 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 78 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=N 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 55 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=7 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # printf %x 119 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:11.908 19:22:10 -- target/invalid.sh@25 -- # string+=w 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.908 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 40 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # 
string+='(' 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 52 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=4 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 101 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=e 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 89 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=Y 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 58 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=: 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 113 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=q 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 57 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=9 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 53 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=5 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 52 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=4 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 114 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=r 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 74 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=J 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 100 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # 
string+=d 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 79 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=O 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 58 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=: 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.167 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # printf %x 88 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:12.167 19:22:10 -- target/invalid.sh@25 -- # string+=X 00:13:12.168 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.168 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.168 19:22:10 -- target/invalid.sh@25 -- # printf %x 122 00:13:12.168 19:22:10 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:12.168 19:22:10 -- target/invalid.sh@25 -- # string+=z 00:13:12.168 19:22:10 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.168 19:22:10 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.168 19:22:10 -- target/invalid.sh@28 -- # [[ o == \- ]] 00:13:12.168 19:22:10 -- target/invalid.sh@31 -- # echo 'oWt.Lox|HR|[~'\''ymKfi`fbN7w(4eY:q954rJdO:Xz' 00:13:12.168 19:22:10 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'oWt.Lox|HR|[~'\''ymKfi`fbN7w(4eY:q954rJdO:Xz' nqn.2016-06.io.spdk:cnode20599 00:13:12.426 [2024-11-17 19:22:10.471257] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20599: invalid model number 'oWt.Lox|HR|[~'ymKfi`fbN7w(4eY:q954rJdO:Xz' 00:13:12.426 19:22:10 -- target/invalid.sh@58 -- # out='request: 00:13:12.426 { 00:13:12.426 "nqn": "nqn.2016-06.io.spdk:cnode20599", 00:13:12.426 "model_number": "oWt.Lox|HR|[~'\''ymKfi`fbN7w(4eY:q954rJdO:Xz", 00:13:12.426 "method": "nvmf_create_subsystem", 00:13:12.426 "req_id": 1 00:13:12.426 } 00:13:12.426 Got JSON-RPC error response 00:13:12.426 response: 00:13:12.426 { 00:13:12.426 "code": -32602, 00:13:12.426 "message": "Invalid MN oWt.Lox|HR|[~'\''ymKfi`fbN7w(4eY:q954rJdO:Xz" 00:13:12.426 }' 00:13:12.426 19:22:10 -- target/invalid.sh@59 -- # [[ request: 00:13:12.426 { 00:13:12.426 "nqn": "nqn.2016-06.io.spdk:cnode20599", 00:13:12.426 "model_number": "oWt.Lox|HR|[~'ymKfi`fbN7w(4eY:q954rJdO:Xz", 00:13:12.426 "method": "nvmf_create_subsystem", 00:13:12.426 "req_id": 1 00:13:12.426 } 00:13:12.426 Got JSON-RPC error response 00:13:12.426 response: 00:13:12.426 { 00:13:12.426 "code": -32602, 00:13:12.426 "message": "Invalid MN oWt.Lox|HR|[~'ymKfi`fbN7w(4eY:q954rJdO:Xz" 00:13:12.426 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:12.426 19:22:10 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:12.683 [2024-11-17 19:22:10.720182] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.683 19:22:10 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:12.941 19:22:10 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 
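The character-by-character loop traced above is target/invalid.sh composing a random 41-character string (one printf %x / echo -e pair per character) to use as an NVMe model number; the NVMe model-number (MN) field is 40 bytes, so nvmf_create_subsystem is expected to reject it, and the run indeed returns the -32602 "Invalid MN ..." JSON-RPC error before the script creates the TCP transport and a baseline SPDK001 subsystem for the listener checks that follow. A minimal standalone sketch of the same negative check, assuming the rpc.py path from this run (the helper name expect_invalid_mn is hypothetical, not part of invalid.sh):

  # Sketch only: reproduce the over-length model-number rejection seen above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  expect_invalid_mn() {
      local mn=$1 out
      # The RPC is expected to fail; keep its stderr so the message can be checked.
      out=$("$rpc" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode20599 2>&1) || true
      [[ $out == *"Invalid MN"* ]] && echo "rejected as expected" || echo "UNEXPECTED: $out"
  }

  # 41 printable characters, one more than the 40-byte MN field allows.
  expect_invalid_mn 'oWt.Lox|HR|[~'\''ymKfi`fbN7w(4eY:q954rJdO:Xz'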
00:13:12.941 19:22:10 -- target/invalid.sh@67 -- # echo '' 00:13:12.941 19:22:10 -- target/invalid.sh@67 -- # head -n 1 00:13:12.941 19:22:10 -- target/invalid.sh@67 -- # IP= 00:13:12.941 19:22:10 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:13.198 [2024-11-17 19:22:11.217937] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:13.198 19:22:11 -- target/invalid.sh@69 -- # out='request: 00:13:13.198 { 00:13:13.198 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:13.198 "listen_address": { 00:13:13.198 "trtype": "tcp", 00:13:13.198 "traddr": "", 00:13:13.198 "trsvcid": "4421" 00:13:13.198 }, 00:13:13.198 "method": "nvmf_subsystem_remove_listener", 00:13:13.198 "req_id": 1 00:13:13.198 } 00:13:13.198 Got JSON-RPC error response 00:13:13.198 response: 00:13:13.198 { 00:13:13.198 "code": -32602, 00:13:13.198 "message": "Invalid parameters" 00:13:13.198 }' 00:13:13.198 19:22:11 -- target/invalid.sh@70 -- # [[ request: 00:13:13.198 { 00:13:13.198 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:13.198 "listen_address": { 00:13:13.198 "trtype": "tcp", 00:13:13.198 "traddr": "", 00:13:13.198 "trsvcid": "4421" 00:13:13.198 }, 00:13:13.198 "method": "nvmf_subsystem_remove_listener", 00:13:13.198 "req_id": 1 00:13:13.198 } 00:13:13.198 Got JSON-RPC error response 00:13:13.198 response: 00:13:13.198 { 00:13:13.198 "code": -32602, 00:13:13.198 "message": "Invalid parameters" 00:13:13.198 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:13.198 19:22:11 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1595 -i 0 00:13:13.457 [2024-11-17 19:22:11.482743] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1595: invalid cntlid range [0-65519] 00:13:13.457 19:22:11 -- target/invalid.sh@73 -- # out='request: 00:13:13.457 { 00:13:13.457 "nqn": "nqn.2016-06.io.spdk:cnode1595", 00:13:13.457 "min_cntlid": 0, 00:13:13.457 "method": "nvmf_create_subsystem", 00:13:13.457 "req_id": 1 00:13:13.457 } 00:13:13.457 Got JSON-RPC error response 00:13:13.457 response: 00:13:13.457 { 00:13:13.457 "code": -32602, 00:13:13.457 "message": "Invalid cntlid range [0-65519]" 00:13:13.457 }' 00:13:13.457 19:22:11 -- target/invalid.sh@74 -- # [[ request: 00:13:13.457 { 00:13:13.457 "nqn": "nqn.2016-06.io.spdk:cnode1595", 00:13:13.457 "min_cntlid": 0, 00:13:13.457 "method": "nvmf_create_subsystem", 00:13:13.457 "req_id": 1 00:13:13.457 } 00:13:13.457 Got JSON-RPC error response 00:13:13.457 response: 00:13:13.457 { 00:13:13.457 "code": -32602, 00:13:13.457 "message": "Invalid cntlid range [0-65519]" 00:13:13.457 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.457 19:22:11 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26818 -i 65520 00:13:13.714 [2024-11-17 19:22:11.735557] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26818: invalid cntlid range [65520-65519] 00:13:13.714 19:22:11 -- target/invalid.sh@75 -- # out='request: 00:13:13.714 { 00:13:13.714 "nqn": "nqn.2016-06.io.spdk:cnode26818", 00:13:13.714 "min_cntlid": 65520, 00:13:13.714 "method": "nvmf_create_subsystem", 00:13:13.714 "req_id": 1 00:13:13.714 } 00:13:13.714 Got JSON-RPC error response 00:13:13.714 response: 00:13:13.714 { 
00:13:13.714 "code": -32602, 00:13:13.714 "message": "Invalid cntlid range [65520-65519]" 00:13:13.714 }' 00:13:13.714 19:22:11 -- target/invalid.sh@76 -- # [[ request: 00:13:13.714 { 00:13:13.714 "nqn": "nqn.2016-06.io.spdk:cnode26818", 00:13:13.714 "min_cntlid": 65520, 00:13:13.714 "method": "nvmf_create_subsystem", 00:13:13.714 "req_id": 1 00:13:13.714 } 00:13:13.714 Got JSON-RPC error response 00:13:13.714 response: 00:13:13.714 { 00:13:13.714 "code": -32602, 00:13:13.714 "message": "Invalid cntlid range [65520-65519]" 00:13:13.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.714 19:22:11 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22267 -I 0 00:13:13.714 [2024-11-17 19:22:11.980438] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22267: invalid cntlid range [1-0] 00:13:13.973 19:22:12 -- target/invalid.sh@77 -- # out='request: 00:13:13.973 { 00:13:13.973 "nqn": "nqn.2016-06.io.spdk:cnode22267", 00:13:13.973 "max_cntlid": 0, 00:13:13.973 "method": "nvmf_create_subsystem", 00:13:13.973 "req_id": 1 00:13:13.973 } 00:13:13.973 Got JSON-RPC error response 00:13:13.973 response: 00:13:13.973 { 00:13:13.973 "code": -32602, 00:13:13.973 "message": "Invalid cntlid range [1-0]" 00:13:13.973 }' 00:13:13.973 19:22:12 -- target/invalid.sh@78 -- # [[ request: 00:13:13.973 { 00:13:13.973 "nqn": "nqn.2016-06.io.spdk:cnode22267", 00:13:13.973 "max_cntlid": 0, 00:13:13.973 "method": "nvmf_create_subsystem", 00:13:13.973 "req_id": 1 00:13:13.973 } 00:13:13.973 Got JSON-RPC error response 00:13:13.973 response: 00:13:13.973 { 00:13:13.973 "code": -32602, 00:13:13.973 "message": "Invalid cntlid range [1-0]" 00:13:13.973 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.973 19:22:12 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11934 -I 65520 00:13:13.973 [2024-11-17 19:22:12.225275] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11934: invalid cntlid range [1-65520] 00:13:14.232 19:22:12 -- target/invalid.sh@79 -- # out='request: 00:13:14.232 { 00:13:14.232 "nqn": "nqn.2016-06.io.spdk:cnode11934", 00:13:14.232 "max_cntlid": 65520, 00:13:14.232 "method": "nvmf_create_subsystem", 00:13:14.232 "req_id": 1 00:13:14.232 } 00:13:14.232 Got JSON-RPC error response 00:13:14.232 response: 00:13:14.232 { 00:13:14.232 "code": -32602, 00:13:14.232 "message": "Invalid cntlid range [1-65520]" 00:13:14.232 }' 00:13:14.232 19:22:12 -- target/invalid.sh@80 -- # [[ request: 00:13:14.232 { 00:13:14.232 "nqn": "nqn.2016-06.io.spdk:cnode11934", 00:13:14.232 "max_cntlid": 65520, 00:13:14.232 "method": "nvmf_create_subsystem", 00:13:14.232 "req_id": 1 00:13:14.232 } 00:13:14.232 Got JSON-RPC error response 00:13:14.232 response: 00:13:14.232 { 00:13:14.232 "code": -32602, 00:13:14.232 "message": "Invalid cntlid range [1-65520]" 00:13:14.232 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.232 19:22:12 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12943 -i 6 -I 5 00:13:14.232 [2024-11-17 19:22:12.470158] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12943: invalid cntlid range [6-5] 00:13:14.232 19:22:12 -- target/invalid.sh@83 -- # out='request: 00:13:14.232 { 00:13:14.232 "nqn": 
"nqn.2016-06.io.spdk:cnode12943", 00:13:14.232 "min_cntlid": 6, 00:13:14.232 "max_cntlid": 5, 00:13:14.232 "method": "nvmf_create_subsystem", 00:13:14.232 "req_id": 1 00:13:14.232 } 00:13:14.232 Got JSON-RPC error response 00:13:14.232 response: 00:13:14.232 { 00:13:14.232 "code": -32602, 00:13:14.232 "message": "Invalid cntlid range [6-5]" 00:13:14.232 }' 00:13:14.232 19:22:12 -- target/invalid.sh@84 -- # [[ request: 00:13:14.232 { 00:13:14.232 "nqn": "nqn.2016-06.io.spdk:cnode12943", 00:13:14.232 "min_cntlid": 6, 00:13:14.232 "max_cntlid": 5, 00:13:14.232 "method": "nvmf_create_subsystem", 00:13:14.232 "req_id": 1 00:13:14.232 } 00:13:14.232 Got JSON-RPC error response 00:13:14.232 response: 00:13:14.232 { 00:13:14.232 "code": -32602, 00:13:14.232 "message": "Invalid cntlid range [6-5]" 00:13:14.232 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.232 19:22:12 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:14.490 19:22:12 -- target/invalid.sh@87 -- # out='request: 00:13:14.490 { 00:13:14.490 "name": "foobar", 00:13:14.490 "method": "nvmf_delete_target", 00:13:14.490 "req_id": 1 00:13:14.490 } 00:13:14.490 Got JSON-RPC error response 00:13:14.490 response: 00:13:14.490 { 00:13:14.490 "code": -32602, 00:13:14.490 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:14.490 }' 00:13:14.490 19:22:12 -- target/invalid.sh@88 -- # [[ request: 00:13:14.490 { 00:13:14.490 "name": "foobar", 00:13:14.490 "method": "nvmf_delete_target", 00:13:14.490 "req_id": 1 00:13:14.490 } 00:13:14.490 Got JSON-RPC error response 00:13:14.490 response: 00:13:14.490 { 00:13:14.490 "code": -32602, 00:13:14.490 "message": "The specified target doesn't exist, cannot delete it." 
00:13:14.490 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:14.490 19:22:12 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:14.490 19:22:12 -- target/invalid.sh@91 -- # nvmftestfini 00:13:14.490 19:22:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:14.490 19:22:12 -- nvmf/common.sh@116 -- # sync 00:13:14.490 19:22:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:14.490 19:22:12 -- nvmf/common.sh@119 -- # set +e 00:13:14.490 19:22:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:14.490 19:22:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:14.490 rmmod nvme_tcp 00:13:14.490 rmmod nvme_fabrics 00:13:14.490 rmmod nvme_keyring 00:13:14.490 19:22:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:14.490 19:22:12 -- nvmf/common.sh@123 -- # set -e 00:13:14.490 19:22:12 -- nvmf/common.sh@124 -- # return 0 00:13:14.490 19:22:12 -- nvmf/common.sh@477 -- # '[' -n 1149176 ']' 00:13:14.490 19:22:12 -- nvmf/common.sh@478 -- # killprocess 1149176 00:13:14.490 19:22:12 -- common/autotest_common.sh@936 -- # '[' -z 1149176 ']' 00:13:14.490 19:22:12 -- common/autotest_common.sh@940 -- # kill -0 1149176 00:13:14.490 19:22:12 -- common/autotest_common.sh@941 -- # uname 00:13:14.491 19:22:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:14.491 19:22:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1149176 00:13:14.491 19:22:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:14.491 19:22:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:14.491 19:22:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1149176' 00:13:14.491 killing process with pid 1149176 00:13:14.491 19:22:12 -- common/autotest_common.sh@955 -- # kill 1149176 00:13:14.491 19:22:12 -- common/autotest_common.sh@960 -- # wait 1149176 00:13:14.750 19:22:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:14.750 19:22:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:14.750 19:22:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:14.750 19:22:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.750 19:22:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:14.750 19:22:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.750 19:22:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.750 19:22:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.283 19:22:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:17.283 00:13:17.283 real 0m9.462s 00:13:17.283 user 0m23.103s 00:13:17.283 sys 0m2.520s 00:13:17.283 19:22:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:17.283 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 ************************************ 00:13:17.283 END TEST nvmf_invalid 00:13:17.283 ************************************ 00:13:17.283 19:22:14 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:17.283 19:22:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:17.283 19:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.283 19:22:14 -- common/autotest_common.sh@10 -- # set +x 00:13:17.283 ************************************ 00:13:17.283 START TEST nvmf_abort 00:13:17.283 ************************************ 00:13:17.284 19:22:14 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:17.284 * Looking for test storage... 00:13:17.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.284 19:22:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:17.284 19:22:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:17.284 19:22:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:17.284 19:22:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:17.284 19:22:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:17.284 19:22:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:17.284 19:22:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:17.284 19:22:15 -- scripts/common.sh@335 -- # IFS=.-: 00:13:17.284 19:22:15 -- scripts/common.sh@335 -- # read -ra ver1 00:13:17.284 19:22:15 -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.284 19:22:15 -- scripts/common.sh@336 -- # read -ra ver2 00:13:17.284 19:22:15 -- scripts/common.sh@337 -- # local 'op=<' 00:13:17.284 19:22:15 -- scripts/common.sh@339 -- # ver1_l=2 00:13:17.284 19:22:15 -- scripts/common.sh@340 -- # ver2_l=1 00:13:17.284 19:22:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:17.284 19:22:15 -- scripts/common.sh@343 -- # case "$op" in 00:13:17.284 19:22:15 -- scripts/common.sh@344 -- # : 1 00:13:17.284 19:22:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:17.284 19:22:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.284 19:22:15 -- scripts/common.sh@364 -- # decimal 1 00:13:17.284 19:22:15 -- scripts/common.sh@352 -- # local d=1 00:13:17.284 19:22:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.284 19:22:15 -- scripts/common.sh@354 -- # echo 1 00:13:17.284 19:22:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:17.284 19:22:15 -- scripts/common.sh@365 -- # decimal 2 00:13:17.284 19:22:15 -- scripts/common.sh@352 -- # local d=2 00:13:17.284 19:22:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.284 19:22:15 -- scripts/common.sh@354 -- # echo 2 00:13:17.284 19:22:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:17.284 19:22:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:17.284 19:22:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:17.284 19:22:15 -- scripts/common.sh@367 -- # return 0 00:13:17.284 19:22:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.284 19:22:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.284 --rc genhtml_branch_coverage=1 00:13:17.284 --rc genhtml_function_coverage=1 00:13:17.284 --rc genhtml_legend=1 00:13:17.284 --rc geninfo_all_blocks=1 00:13:17.284 --rc geninfo_unexecuted_blocks=1 00:13:17.284 00:13:17.284 ' 00:13:17.284 19:22:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.284 --rc genhtml_branch_coverage=1 00:13:17.284 --rc genhtml_function_coverage=1 00:13:17.284 --rc genhtml_legend=1 00:13:17.284 --rc geninfo_all_blocks=1 00:13:17.284 --rc geninfo_unexecuted_blocks=1 00:13:17.284 00:13:17.284 ' 00:13:17.284 19:22:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.284 --rc genhtml_branch_coverage=1 00:13:17.284 --rc 
genhtml_function_coverage=1 00:13:17.284 --rc genhtml_legend=1 00:13:17.284 --rc geninfo_all_blocks=1 00:13:17.284 --rc geninfo_unexecuted_blocks=1 00:13:17.284 00:13:17.284 ' 00:13:17.284 19:22:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.284 --rc genhtml_branch_coverage=1 00:13:17.284 --rc genhtml_function_coverage=1 00:13:17.284 --rc genhtml_legend=1 00:13:17.284 --rc geninfo_all_blocks=1 00:13:17.284 --rc geninfo_unexecuted_blocks=1 00:13:17.284 00:13:17.284 ' 00:13:17.284 19:22:15 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.284 19:22:15 -- nvmf/common.sh@7 -- # uname -s 00:13:17.284 19:22:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.284 19:22:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.284 19:22:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.284 19:22:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.284 19:22:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.284 19:22:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.284 19:22:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.284 19:22:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.284 19:22:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.284 19:22:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.284 19:22:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.284 19:22:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.284 19:22:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.284 19:22:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.284 19:22:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.284 19:22:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.284 19:22:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.284 19:22:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.284 19:22:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.284 19:22:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.284 19:22:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.284 19:22:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.284 19:22:15 -- paths/export.sh@5 -- # export PATH 00:13:17.284 19:22:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.284 19:22:15 -- nvmf/common.sh@46 -- # : 0 00:13:17.284 19:22:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:17.284 19:22:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:17.284 19:22:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:17.284 19:22:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.284 19:22:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.284 19:22:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:17.284 19:22:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:17.284 19:22:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:17.284 19:22:15 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.284 19:22:15 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:17.284 19:22:15 -- target/abort.sh@14 -- # nvmftestinit 00:13:17.284 19:22:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:17.284 19:22:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.284 19:22:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:17.284 19:22:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:17.284 19:22:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:17.284 19:22:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.284 19:22:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.284 19:22:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.284 19:22:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:17.284 19:22:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:17.284 19:22:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:17.284 19:22:15 -- common/autotest_common.sh@10 -- # set +x 00:13:19.189 19:22:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:19.189 19:22:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:19.189 19:22:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:19.189 19:22:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:19.189 19:22:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:19.189 19:22:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:19.189 19:22:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:19.189 19:22:17 -- nvmf/common.sh@294 -- # net_devs=() 00:13:19.189 19:22:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:19.189 19:22:17 -- nvmf/common.sh@295 -- 
# e810=() 00:13:19.189 19:22:17 -- nvmf/common.sh@295 -- # local -ga e810 00:13:19.189 19:22:17 -- nvmf/common.sh@296 -- # x722=() 00:13:19.189 19:22:17 -- nvmf/common.sh@296 -- # local -ga x722 00:13:19.189 19:22:17 -- nvmf/common.sh@297 -- # mlx=() 00:13:19.189 19:22:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:19.189 19:22:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.189 19:22:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:19.189 19:22:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:19.189 19:22:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:19.189 19:22:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:19.189 19:22:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:19.189 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:19.189 19:22:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:19.189 19:22:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:19.189 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:19.189 19:22:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:19.189 19:22:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:19.189 19:22:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.189 19:22:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:19.189 19:22:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.189 19:22:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:19.189 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:19.189 19:22:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.189 19:22:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:19.189 19:22:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.189 19:22:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:19.189 19:22:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.189 19:22:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:19.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:19.189 19:22:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.189 19:22:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:19.189 19:22:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:19.189 19:22:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:19.189 19:22:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:19.189 19:22:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.189 19:22:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.189 19:22:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.189 19:22:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:19.189 19:22:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.189 19:22:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.189 19:22:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:19.189 19:22:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.189 19:22:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.189 19:22:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:19.189 19:22:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:19.189 19:22:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.189 19:22:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.189 19:22:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.189 19:22:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.189 19:22:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:19.189 19:22:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.189 19:22:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.189 19:22:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.189 19:22:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:19.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:13:19.189 00:13:19.189 --- 10.0.0.2 ping statistics --- 00:13:19.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.189 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:19.189 19:22:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:19.189 00:13:19.189 --- 10.0.0.1 ping statistics --- 00:13:19.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.189 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:19.189 19:22:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.189 19:22:17 -- nvmf/common.sh@410 -- # return 0 00:13:19.190 19:22:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:19.190 19:22:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.190 19:22:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:19.190 19:22:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:19.190 19:22:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.190 19:22:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:19.190 19:22:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:19.190 19:22:17 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:19.190 19:22:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:19.190 19:22:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:19.190 19:22:17 -- common/autotest_common.sh@10 -- # set +x 00:13:19.190 19:22:17 -- nvmf/common.sh@469 -- # nvmfpid=1151973 00:13:19.190 19:22:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:19.190 19:22:17 -- nvmf/common.sh@470 -- # waitforlisten 1151973 00:13:19.190 19:22:17 -- common/autotest_common.sh@829 -- # '[' -z 1151973 ']' 00:13:19.190 19:22:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.190 19:22:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.190 19:22:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.190 19:22:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.190 19:22:17 -- common/autotest_common.sh@10 -- # set +x 00:13:19.190 [2024-11-17 19:22:17.287177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:19.190 [2024-11-17 19:22:17.287248] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.190 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.190 [2024-11-17 19:22:17.358360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:19.449 [2024-11-17 19:22:17.458447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.449 [2024-11-17 19:22:17.458622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.449 [2024-11-17 19:22:17.458641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.449 [2024-11-17 19:22:17.458656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
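Condensed, the nvmftestinit/nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in iptables, connectivity is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace with core mask 0xE. A stripped-down sketch of the same plumbing, using the interface names, addresses and binary path from this run (run as root; error handling omitted):

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator sanity check

  # The target itself is started inside the namespace (backgrounded; the
  # harness then waits for it to listen on /var/tmp/spdk.sock before issuing RPCs):
  ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &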
00:13:19.449 [2024-11-17 19:22:17.458757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.449 [2024-11-17 19:22:17.458814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.449 [2024-11-17 19:22:17.458818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.387 19:22:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.387 19:22:18 -- common/autotest_common.sh@862 -- # return 0 00:13:20.387 19:22:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:20.387 19:22:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 19:22:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.387 19:22:18 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 [2024-11-17 19:22:18.347327] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 Malloc0 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 Delay0 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 [2024-11-17 19:22:18.412994] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.387 19:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.387 19:22:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.387 19:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.387 19:22:18 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:20.387 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.387 [2024-11-17 19:22:18.559797] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:23.011 Initializing NVMe Controllers 00:13:23.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:23.011 controller IO queue size 128 less than required 00:13:23.011 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:23.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:23.011 Initialization complete. Launching workers. 00:13:23.011 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32402 00:13:23.011 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32463, failed to submit 62 00:13:23.011 success 32402, unsuccess 61, failed 0 00:13:23.011 19:22:20 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:23.011 19:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.011 19:22:20 -- common/autotest_common.sh@10 -- # set +x 00:13:23.011 19:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.011 19:22:20 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:23.011 19:22:20 -- target/abort.sh@38 -- # nvmftestfini 00:13:23.011 19:22:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.011 19:22:20 -- nvmf/common.sh@116 -- # sync 00:13:23.011 19:22:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:23.011 19:22:20 -- nvmf/common.sh@119 -- # set +e 00:13:23.011 19:22:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.011 19:22:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:23.011 rmmod nvme_tcp 00:13:23.011 rmmod nvme_fabrics 00:13:23.011 rmmod nvme_keyring 00:13:23.011 19:22:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.011 19:22:20 -- nvmf/common.sh@123 -- # set -e 00:13:23.011 19:22:20 -- nvmf/common.sh@124 -- # return 0 00:13:23.011 19:22:20 -- nvmf/common.sh@477 -- # '[' -n 1151973 ']' 00:13:23.011 19:22:20 -- nvmf/common.sh@478 -- # killprocess 1151973 00:13:23.011 19:22:20 -- common/autotest_common.sh@936 -- # '[' -z 1151973 ']' 00:13:23.011 19:22:20 -- common/autotest_common.sh@940 -- # kill -0 1151973 00:13:23.011 19:22:20 -- common/autotest_common.sh@941 -- # uname 00:13:23.011 19:22:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.011 19:22:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1151973 00:13:23.011 19:22:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:23.011 19:22:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:23.011 19:22:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1151973' 00:13:23.011 killing process with pid 1151973 00:13:23.011 19:22:20 -- common/autotest_common.sh@955 -- # kill 1151973 00:13:23.011 19:22:20 -- common/autotest_common.sh@960 -- # wait 1151973 00:13:23.011 19:22:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:23.011 19:22:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:23.011 19:22:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:23.012 19:22:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.012 19:22:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:23.012 
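Taken together, the rpc_cmd calls above are the entire nvmf_abort fixture: a TCP transport, a 64 MB Malloc bdev (4096-byte blocks) wrapped in a Delay bdev whose large -r/-t/-w/-n latencies keep submitted I/O queued inside the target long enough to be abortable, a subsystem cnode0 exposing Delay0, and TCP listeners on 10.0.0.2:4420; the abort example then drives it at queue depth 128 for one second, and in this run 32402 of the 32463 submitted aborts succeeded. A condensed sketch of the same sequence, with the RPC arguments copied verbatim from the trace, the paths taken from this workspace, and the harness's rpc_cmd wrapper written out as direct scripts/rpc.py invocations:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
  "$rpc" bdev_malloc_create 64 4096 -b Malloc0               # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096
  "$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000         # artificial latency so I/O stays in flight
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Initiator-side workload: queue depth 128 for 1 second, then tear the subsystem down.
  "$spdk"/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0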
19:22:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.012 19:22:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.012 19:22:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.917 19:22:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:24.917 00:13:24.917 real 0m8.112s 00:13:24.917 user 0m13.495s 00:13:24.917 sys 0m2.508s 00:13:24.917 19:22:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:24.917 19:22:23 -- common/autotest_common.sh@10 -- # set +x 00:13:24.917 ************************************ 00:13:24.917 END TEST nvmf_abort 00:13:24.917 ************************************ 00:13:24.917 19:22:23 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:24.917 19:22:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:24.917 19:22:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.917 19:22:23 -- common/autotest_common.sh@10 -- # set +x 00:13:24.917 ************************************ 00:13:24.917 START TEST nvmf_ns_hotplug_stress 00:13:24.917 ************************************ 00:13:24.917 19:22:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:24.917 * Looking for test storage... 00:13:25.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.176 19:22:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:25.176 19:22:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:25.176 19:22:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:25.176 19:22:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:25.176 19:22:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:25.176 19:22:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:25.176 19:22:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:25.176 19:22:23 -- scripts/common.sh@335 -- # IFS=.-: 00:13:25.176 19:22:23 -- scripts/common.sh@335 -- # read -ra ver1 00:13:25.176 19:22:23 -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.176 19:22:23 -- scripts/common.sh@336 -- # read -ra ver2 00:13:25.176 19:22:23 -- scripts/common.sh@337 -- # local 'op=<' 00:13:25.176 19:22:23 -- scripts/common.sh@339 -- # ver1_l=2 00:13:25.176 19:22:23 -- scripts/common.sh@340 -- # ver2_l=1 00:13:25.176 19:22:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:25.176 19:22:23 -- scripts/common.sh@343 -- # case "$op" in 00:13:25.176 19:22:23 -- scripts/common.sh@344 -- # : 1 00:13:25.176 19:22:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:25.176 19:22:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.176 19:22:23 -- scripts/common.sh@364 -- # decimal 1 00:13:25.176 19:22:23 -- scripts/common.sh@352 -- # local d=1 00:13:25.176 19:22:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.176 19:22:23 -- scripts/common.sh@354 -- # echo 1 00:13:25.176 19:22:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:25.176 19:22:23 -- scripts/common.sh@365 -- # decimal 2 00:13:25.176 19:22:23 -- scripts/common.sh@352 -- # local d=2 00:13:25.176 19:22:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.176 19:22:23 -- scripts/common.sh@354 -- # echo 2 00:13:25.176 19:22:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:25.177 19:22:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:25.177 19:22:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:25.177 19:22:23 -- scripts/common.sh@367 -- # return 0 00:13:25.177 19:22:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.177 19:22:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:25.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.177 --rc genhtml_branch_coverage=1 00:13:25.177 --rc genhtml_function_coverage=1 00:13:25.177 --rc genhtml_legend=1 00:13:25.177 --rc geninfo_all_blocks=1 00:13:25.177 --rc geninfo_unexecuted_blocks=1 00:13:25.177 00:13:25.177 ' 00:13:25.177 19:22:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:25.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.177 --rc genhtml_branch_coverage=1 00:13:25.177 --rc genhtml_function_coverage=1 00:13:25.177 --rc genhtml_legend=1 00:13:25.177 --rc geninfo_all_blocks=1 00:13:25.177 --rc geninfo_unexecuted_blocks=1 00:13:25.177 00:13:25.177 ' 00:13:25.177 19:22:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:25.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.177 --rc genhtml_branch_coverage=1 00:13:25.177 --rc genhtml_function_coverage=1 00:13:25.177 --rc genhtml_legend=1 00:13:25.177 --rc geninfo_all_blocks=1 00:13:25.177 --rc geninfo_unexecuted_blocks=1 00:13:25.177 00:13:25.177 ' 00:13:25.177 19:22:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:25.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.177 --rc genhtml_branch_coverage=1 00:13:25.177 --rc genhtml_function_coverage=1 00:13:25.177 --rc genhtml_legend=1 00:13:25.177 --rc geninfo_all_blocks=1 00:13:25.177 --rc geninfo_unexecuted_blocks=1 00:13:25.177 00:13:25.177 ' 00:13:25.177 19:22:23 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.177 19:22:23 -- nvmf/common.sh@7 -- # uname -s 00:13:25.177 19:22:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.177 19:22:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.177 19:22:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.177 19:22:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.177 19:22:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.177 19:22:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.177 19:22:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.177 19:22:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.177 19:22:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.177 19:22:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.177 19:22:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.177 19:22:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.177 19:22:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.177 19:22:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.177 19:22:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.177 19:22:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.177 19:22:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.177 19:22:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.177 19:22:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.177 19:22:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.177 19:22:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.177 19:22:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.177 19:22:23 -- paths/export.sh@5 -- # export PATH 00:13:25.177 19:22:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.177 19:22:23 -- nvmf/common.sh@46 -- # : 0 00:13:25.177 19:22:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.177 19:22:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.177 19:22:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.177 19:22:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.177 19:22:23 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.177 19:22:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:25.177 19:22:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.177 19:22:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.177 19:22:23 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.177 19:22:23 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:25.177 19:22:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:25.177 19:22:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.177 19:22:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:25.177 19:22:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:25.177 19:22:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:25.177 19:22:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.177 19:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.177 19:22:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.177 19:22:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:25.177 19:22:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:25.177 19:22:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:25.177 19:22:23 -- common/autotest_common.sh@10 -- # set +x 00:13:27.081 19:22:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:27.081 19:22:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:27.081 19:22:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:27.081 19:22:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:27.081 19:22:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:27.081 19:22:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:27.081 19:22:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:27.081 19:22:25 -- nvmf/common.sh@294 -- # net_devs=() 00:13:27.081 19:22:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:27.081 19:22:25 -- nvmf/common.sh@295 -- # e810=() 00:13:27.081 19:22:25 -- nvmf/common.sh@295 -- # local -ga e810 00:13:27.081 19:22:25 -- nvmf/common.sh@296 -- # x722=() 00:13:27.081 19:22:25 -- nvmf/common.sh@296 -- # local -ga x722 00:13:27.081 19:22:25 -- nvmf/common.sh@297 -- # mlx=() 00:13:27.081 19:22:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:27.081 19:22:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.081 19:22:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:27.081 19:22:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:27.081 
19:22:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:27.081 19:22:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:27.081 19:22:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:27.081 19:22:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.081 19:22:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:27.081 19:22:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:27.081 19:22:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:27.081 19:22:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:27.081 19:22:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.081 19:22:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:27.081 19:22:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.081 19:22:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:27.081 19:22:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.081 19:22:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:27.081 19:22:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.081 19:22:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:27.081 19:22:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.081 19:22:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.081 19:22:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.081 19:22:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:27.081 19:22:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:27.081 19:22:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:27.081 19:22:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:27.081 19:22:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.081 19:22:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.081 19:22:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.081 19:22:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:27.081 19:22:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.081 19:22:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.081 19:22:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:27.081 
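The records above are nvmf/common.sh discovering usable NICs: it scans the PCI bus for the supported Intel E810 (0x8086:0x159b) and Mellanox device IDs, then resolves each matching PCI address to its kernel net device through sysfs (hence "Found net devices under 0000:0a:00.0: cvl_0_0"). A minimal sketch of that resolution step, assuming the pci_devs array the script has already populated (mirrors the nvmf/common.sh@381-@389 records in the trace):

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")                   # collected into TCP_INTERFACE_LIST later
done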
19:22:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.081 19:22:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.081 19:22:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:27.081 19:22:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:27.081 19:22:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.081 19:22:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.081 19:22:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.081 19:22:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.081 19:22:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:27.081 19:22:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.339 19:22:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.339 19:22:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.339 19:22:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:27.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:13:27.339 00:13:27.339 --- 10.0.0.2 ping statistics --- 00:13:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.339 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:27.339 19:22:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:27.339 00:13:27.339 --- 10.0.0.1 ping statistics --- 00:13:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.339 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:27.339 19:22:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.339 19:22:25 -- nvmf/common.sh@410 -- # return 0 00:13:27.339 19:22:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:27.339 19:22:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.339 19:22:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:27.339 19:22:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:27.339 19:22:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.339 19:22:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:27.339 19:22:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:27.339 19:22:25 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:27.339 19:22:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:27.339 19:22:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.340 19:22:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.340 19:22:25 -- nvmf/common.sh@469 -- # nvmfpid=1154361 00:13:27.340 19:22:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:27.340 19:22:25 -- nvmf/common.sh@470 -- # waitforlisten 1154361 00:13:27.340 19:22:25 -- common/autotest_common.sh@829 -- # '[' -z 1154361 ']' 00:13:27.340 19:22:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.340 19:22:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.340 19:22:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:27.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.340 19:22:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.340 19:22:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.340 [2024-11-17 19:22:25.465520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:27.340 [2024-11-17 19:22:25.465595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.340 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.340 [2024-11-17 19:22:25.528608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.599 [2024-11-17 19:22:25.612566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:27.599 [2024-11-17 19:22:25.612753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.599 [2024-11-17 19:22:25.612774] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.599 [2024-11-17 19:22:25.612787] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.599 [2024-11-17 19:22:25.612875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.599 [2024-11-17 19:22:25.612931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.599 [2024-11-17 19:22:25.612933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.533 19:22:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.533 19:22:26 -- common/autotest_common.sh@862 -- # return 0 00:13:28.533 19:22:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:28.533 19:22:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.533 19:22:26 -- common/autotest_common.sh@10 -- # set +x 00:13:28.533 19:22:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.533 19:22:26 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:28.533 19:22:26 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:28.533 [2024-11-17 19:22:26.711820] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.533 19:22:26 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:28.790 19:22:26 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.048 [2024-11-17 19:22:27.206721] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.048 19:22:27 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.306 19:22:27 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:29.563 Malloc0 00:13:29.563 19:22:27 -- target/ns_hotplug_stress.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:29.821 Delay0 00:13:29.821 19:22:28 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.078 19:22:28 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:30.335 NULL1 00:13:30.336 19:22:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:30.593 19:22:28 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1154797 00:13:30.593 19:22:28 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:30.593 19:22:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:30.593 19:22:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.593 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.851 19:22:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.108 19:22:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:31.108 19:22:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:31.364 true 00:13:31.364 19:22:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:31.364 19:22:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.621 19:22:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.879 19:22:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:31.879 19:22:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:32.136 true 00:13:32.136 19:22:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:32.136 19:22:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.394 19:22:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.652 19:22:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:32.652 19:22:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:32.908 true 00:13:32.908 19:22:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:32.908 19:22:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.283 Read completed with error (sct=0, sc=11) 00:13:34.283 19:22:32 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.283 19:22:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:34.283 19:22:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:34.540 true 00:13:34.540 19:22:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:34.540 19:22:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.798 19:22:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.055 19:22:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:35.055 19:22:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:35.313 true 00:13:35.313 19:22:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:35.313 19:22:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.570 19:22:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.828 19:22:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:35.828 19:22:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:36.085 true 00:13:36.085 19:22:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:36.085 19:22:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.020 19:22:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.277 19:22:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:37.277 19:22:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:37.535 true 00:13:37.535 19:22:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:37.535 19:22:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.792 19:22:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.050 19:22:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:38.050 19:22:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:38.308 true 00:13:38.308 19:22:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:38.308 19:22:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.245 
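Each block of RPCs here is one pass of the single-namespace hot-plug loop in ns_hotplug_stress.sh: while spdk_nvme_perf (started earlier with -t 30 against 10.0.0.2:4420) keeps random reads running, the script removes namespace 1 from cnode1, re-adds the Delay0 bdev as a namespace, and grows the NULL1 bdev by one block each iteration (1001, 1002, ... as the bdev_null_resize calls show). A condensed sketch of that loop, with $rpc abbreviating the full scripts/rpc.py path used in the trace and the loop condition inferred from the repeated kill -0 checks:

null_size=1000
while kill -0 "$PERF_PID"; do                                     # keep going while perf is alive
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove ns 1 under load
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add Delay0 back
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                      # resize the NULL1 namespace's bdev
done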
19:22:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.503 19:22:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:39.503 19:22:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:39.761 true 00:13:39.761 19:22:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:39.761 19:22:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.019 19:22:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.309 19:22:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:40.309 19:22:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:40.623 true 00:13:40.623 19:22:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:40.623 19:22:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.882 19:22:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.882 19:22:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:40.882 19:22:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:41.142 true 00:13:41.142 19:22:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:41.142 19:22:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.524 19:22:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.524 19:22:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:42.524 19:22:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:42.782 true 00:13:42.782 19:22:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:42.782 19:22:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.039 19:22:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.297 19:22:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:43.297 19:22:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:43.555 true 00:13:43.555 19:22:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:43.555 19:22:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.812 19:22:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.070 19:22:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:44.070 19:22:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:44.327 true 00:13:44.327 19:22:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:44.327 19:22:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.700 19:22:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.700 19:22:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:45.700 19:22:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:45.958 true 00:13:45.958 19:22:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:45.958 19:22:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.216 19:22:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.474 19:22:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:46.474 19:22:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:46.731 true 00:13:46.731 19:22:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:46.731 19:22:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.668 19:22:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.926 19:22:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:47.926 19:22:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:48.184 true 00:13:48.184 19:22:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:48.184 19:22:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.441 19:22:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.699 19:22:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:48.699 19:22:46 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:48.957 true 00:13:48.957 19:22:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:48.957 19:22:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.894 19:22:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.152 19:22:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:50.152 19:22:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:50.410 true 00:13:50.410 19:22:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:50.410 19:22:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.668 19:22:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.925 19:22:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:50.925 19:22:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:51.183 true 00:13:51.183 19:22:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:51.183 19:22:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.441 19:22:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.699 19:22:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:51.699 19:22:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:51.956 true 00:13:51.956 19:22:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:51.956 19:22:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.892 19:22:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.150 19:22:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:53.150 19:22:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:53.407 true 00:13:53.407 19:22:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:53.407 19:22:51 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.344 19:22:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.602 19:22:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:54.602 19:22:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:54.602 true 00:13:54.860 19:22:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:54.860 19:22:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.118 19:22:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.377 19:22:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:55.377 19:22:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:55.377 true 00:13:55.637 19:22:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:55.637 19:22:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.204 19:22:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.461 19:22:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:56.461 19:22:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:56.718 true 00:13:56.718 19:22:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:56.718 19:22:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.976 19:22:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.234 19:22:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:57.234 19:22:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:57.492 true 00:13:57.492 19:22:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:57.492 19:22:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.750 19:22:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.008 19:22:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:58.008 19:22:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:58.266 true 00:13:58.266 19:22:56 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:58.266 19:22:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.643 19:22:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.643 19:22:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:59.643 19:22:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:59.900 true 00:13:59.900 19:22:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:13:59.900 19:22:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.158 19:22:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.416 19:22:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:00.416 19:22:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:00.674 true 00:14:00.674 19:22:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:14:00.674 19:22:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.611 Initializing NVMe Controllers 00:14:01.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.611 Controller IO queue size 128, less than required. 00:14:01.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.611 Controller IO queue size 128, less than required. 00:14:01.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:01.611 Initialization complete. Launching workers. 
00:14:01.611 ======================================================== 00:14:01.611 Latency(us) 00:14:01.611 Device Information : IOPS MiB/s Average min max 00:14:01.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 694.63 0.34 82216.66 2888.04 1025349.06 00:14:01.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10097.83 4.93 12638.80 1550.25 441058.91 00:14:01.611 ======================================================== 00:14:01.611 Total : 10792.47 5.27 17117.03 1550.25 1025349.06 00:14:01.611 00:14:01.611 19:22:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.869 19:22:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:01.869 19:22:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:02.128 true 00:14:02.128 19:23:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1154797 00:14:02.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1154797) - No such process 00:14:02.128 19:23:00 -- target/ns_hotplug_stress.sh@53 -- # wait 1154797 00:14:02.128 19:23:00 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.389 19:23:00 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.649 19:23:00 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:02.649 19:23:00 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:02.649 19:23:00 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:02.649 19:23:00 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.649 19:23:00 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:02.649 null0 00:14:02.908 19:23:00 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:02.908 19:23:00 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.908 19:23:00 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:02.908 null1 00:14:03.168 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.168 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.168 19:23:01 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:03.168 null2 00:14:03.168 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.168 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.168 19:23:01 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:03.426 null3 00:14:03.426 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.426 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.426 19:23:01 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:03.685 null4 00:14:03.685 19:23:01 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.685 19:23:01 -- target/ns_hotplug_stress.sh@59 
-- # (( i < nthreads )) 00:14:03.685 19:23:01 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:03.943 null5 00:14:03.943 19:23:02 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.943 19:23:02 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.943 19:23:02 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:04.201 null6 00:14:04.201 19:23:02 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:04.201 19:23:02 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:04.201 19:23:02 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:04.460 null7 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.460 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@66 -- # wait 1158970 1158971 1158973 1158975 1158977 1158979 1158981 1158983 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.461 19:23:02 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.720 19:23:02 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.979 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
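From here the test is in its multi-threaded phase: after perf exits ("No such process" on the kill -0 check), both namespaces are removed, eight null bdevs (null0-null7) are created, and eight add_remove workers are forked, one per namespace ID 1-8, then reaped with the wait on their PIDs. Each worker simply toggles its own namespace ten times, which is what the interleaved @16/@17/@18 records that follow show. A sketch of that worker as the trace presents it, with $rpc again abbreviating scripts/rpc.py:

add_remove() {                                       # ns_hotplug_stress.sh@14-@18
    local nsid=$1 bdev=$2                            # e.g. nsid=8 bdev=null7
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

# launched once per bdev, all in parallel, then reaped:
#   add_remove 1 null0 &  ...  add_remove 8 null7 &  wait "${pids[@]}"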
00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.237 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.496 19:23:03 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.754 19:23:03 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.012 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.013 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.271 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.529 19:23:04 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.529 19:23:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.788 19:23:04 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.045 19:23:05 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.045 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.303 19:23:05 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.561 19:23:05 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.561 19:23:05 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
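The trace above is the namespace hotplug loop from ns_hotplug_stress.sh: each pass attaches the eight null bdevs (null0..null7) to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then hot-removes them again while the stress I/O keeps running. A minimal sketch of that loop follows; the C-style counter matches the (( ++i )) / (( i < 10 )) lines at @16, but the split into one backgrounded worker per namespace is an inference from the interleaved ordering, not a copy of the script.

  # Sketch only: reconstructed from the @16-@18 xtrace lines above.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1

  add_remove() {                                   # one worker per namespace id
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; ++i )); do             # @16: ten add/remove passes
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # @17: attach the null bdev as nsid
          $rpc_py nvmf_subsystem_remove_ns "$subsys" "$nsid"           # @18: hot-remove it again
      done
  }

  for n in $(seq 1 8); do
      add_remove "$n" "null$((n - 1))" &           # nsid 1..8 backed by null0..null7, run concurrently (assumed)
  done
  wait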
00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.819 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.077 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.334 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.334 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.334 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.334 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.592 19:23:06 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.851 19:23:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.109 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.367 19:23:07 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.626 19:23:07 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.884 19:23:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:10.142 19:23:08 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:10.142 19:23:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:10.142 19:23:08 -- nvmf/common.sh@116 -- # sync 00:14:10.142 19:23:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:10.142 19:23:08 -- nvmf/common.sh@119 -- # set +e 00:14:10.142 19:23:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:10.142 19:23:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:10.142 rmmod nvme_tcp 00:14:10.142 rmmod nvme_fabrics 00:14:10.142 rmmod nvme_keyring 00:14:10.142 19:23:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:10.142 19:23:08 -- nvmf/common.sh@123 -- # set -e 00:14:10.142 19:23:08 -- nvmf/common.sh@124 -- # return 0 00:14:10.142 19:23:08 -- nvmf/common.sh@477 -- # '[' -n 1154361 ']' 00:14:10.142 19:23:08 -- nvmf/common.sh@478 -- # killprocess 1154361 00:14:10.142 19:23:08 -- common/autotest_common.sh@936 -- # '[' -z 1154361 ']' 00:14:10.142 19:23:08 -- common/autotest_common.sh@940 -- # kill -0 1154361 00:14:10.142 19:23:08 -- common/autotest_common.sh@941 -- # uname 00:14:10.142 19:23:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.142 19:23:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1154361 00:14:10.142 19:23:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:10.142 19:23:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:10.142 19:23:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1154361' 00:14:10.142 killing process with pid 1154361 00:14:10.142 19:23:08 -- common/autotest_common.sh@955 -- # kill 1154361 00:14:10.142 19:23:08 -- common/autotest_common.sh@960 -- # wait 1154361 00:14:10.400 19:23:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:10.401 19:23:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:10.401 19:23:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:10.401 19:23:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.401 19:23:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:10.401 19:23:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.401 19:23:08 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:10.401 19:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.932 19:23:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:12.932 00:14:12.932 real 0m47.539s 00:14:12.932 user 3m38.494s 00:14:12.932 sys 0m15.091s 00:14:12.932 19:23:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:12.932 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:14:12.932 ************************************ 00:14:12.932 END TEST nvmf_ns_hotplug_stress 00:14:12.932 ************************************ 00:14:12.932 19:23:10 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:12.932 19:23:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:12.932 19:23:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.932 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:14:12.932 ************************************ 00:14:12.932 START TEST nvmf_connect_stress 00:14:12.932 ************************************ 00:14:12.932 19:23:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:12.932 * Looking for test storage... 00:14:12.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.932 19:23:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:12.932 19:23:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:12.932 19:23:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:12.932 19:23:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:12.932 19:23:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:12.932 19:23:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:12.932 19:23:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:12.932 19:23:10 -- scripts/common.sh@335 -- # IFS=.-: 00:14:12.932 19:23:10 -- scripts/common.sh@335 -- # read -ra ver1 00:14:12.932 19:23:10 -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.932 19:23:10 -- scripts/common.sh@336 -- # read -ra ver2 00:14:12.932 19:23:10 -- scripts/common.sh@337 -- # local 'op=<' 00:14:12.932 19:23:10 -- scripts/common.sh@339 -- # ver1_l=2 00:14:12.932 19:23:10 -- scripts/common.sh@340 -- # ver2_l=1 00:14:12.932 19:23:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:12.932 19:23:10 -- scripts/common.sh@343 -- # case "$op" in 00:14:12.932 19:23:10 -- scripts/common.sh@344 -- # : 1 00:14:12.932 19:23:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:12.932 19:23:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.932 19:23:10 -- scripts/common.sh@364 -- # decimal 1 00:14:12.932 19:23:10 -- scripts/common.sh@352 -- # local d=1 00:14:12.932 19:23:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.932 19:23:10 -- scripts/common.sh@354 -- # echo 1 00:14:12.932 19:23:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:12.932 19:23:10 -- scripts/common.sh@365 -- # decimal 2 00:14:12.932 19:23:10 -- scripts/common.sh@352 -- # local d=2 00:14:12.932 19:23:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.932 19:23:10 -- scripts/common.sh@354 -- # echo 2 00:14:12.932 19:23:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:12.932 19:23:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:12.932 19:23:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:12.932 19:23:10 -- scripts/common.sh@367 -- # return 0 00:14:12.932 19:23:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.932 19:23:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.932 --rc genhtml_branch_coverage=1 00:14:12.932 --rc genhtml_function_coverage=1 00:14:12.932 --rc genhtml_legend=1 00:14:12.932 --rc geninfo_all_blocks=1 00:14:12.932 --rc geninfo_unexecuted_blocks=1 00:14:12.932 00:14:12.932 ' 00:14:12.932 19:23:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.932 --rc genhtml_branch_coverage=1 00:14:12.932 --rc genhtml_function_coverage=1 00:14:12.932 --rc genhtml_legend=1 00:14:12.932 --rc geninfo_all_blocks=1 00:14:12.932 --rc geninfo_unexecuted_blocks=1 00:14:12.932 00:14:12.932 ' 00:14:12.932 19:23:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.932 --rc genhtml_branch_coverage=1 00:14:12.932 --rc genhtml_function_coverage=1 00:14:12.932 --rc genhtml_legend=1 00:14:12.932 --rc geninfo_all_blocks=1 00:14:12.932 --rc geninfo_unexecuted_blocks=1 00:14:12.932 00:14:12.932 ' 00:14:12.932 19:23:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.932 --rc genhtml_branch_coverage=1 00:14:12.932 --rc genhtml_function_coverage=1 00:14:12.932 --rc genhtml_legend=1 00:14:12.932 --rc geninfo_all_blocks=1 00:14:12.932 --rc geninfo_unexecuted_blocks=1 00:14:12.932 00:14:12.932 ' 00:14:12.932 19:23:10 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.932 19:23:10 -- nvmf/common.sh@7 -- # uname -s 00:14:12.932 19:23:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.932 19:23:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.932 19:23:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.932 19:23:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.932 19:23:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.932 19:23:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.932 19:23:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.932 19:23:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.932 19:23:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.932 19:23:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.932 19:23:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.932 19:23:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.932 19:23:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.932 19:23:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.933 19:23:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.933 19:23:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.933 19:23:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.933 19:23:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.933 19:23:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.933 19:23:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.933 19:23:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.933 19:23:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.933 19:23:10 -- paths/export.sh@5 -- # export PATH 00:14:12.933 19:23:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.933 19:23:10 -- nvmf/common.sh@46 -- # : 0 00:14:12.933 19:23:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:12.933 19:23:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:12.933 19:23:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:12.933 19:23:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.933 19:23:10 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.933 19:23:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:12.933 19:23:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:12.933 19:23:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:12.933 19:23:10 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:12.933 19:23:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:12.933 19:23:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.933 19:23:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:12.933 19:23:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:12.933 19:23:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:12.933 19:23:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.933 19:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.933 19:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.933 19:23:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:12.933 19:23:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:12.933 19:23:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:12.933 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:14:14.833 19:23:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:14.833 19:23:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:14.833 19:23:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:14.833 19:23:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:14.833 19:23:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:14.833 19:23:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:14.833 19:23:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:14.833 19:23:13 -- nvmf/common.sh@294 -- # net_devs=() 00:14:14.833 19:23:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:14.833 19:23:13 -- nvmf/common.sh@295 -- # e810=() 00:14:14.833 19:23:13 -- nvmf/common.sh@295 -- # local -ga e810 00:14:14.833 19:23:13 -- nvmf/common.sh@296 -- # x722=() 00:14:14.833 19:23:13 -- nvmf/common.sh@296 -- # local -ga x722 00:14:14.833 19:23:13 -- nvmf/common.sh@297 -- # mlx=() 00:14:14.833 19:23:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:14.833 19:23:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.833 19:23:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:14.833 19:23:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:14.833 
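At this point connect_stress.sh has entered nvmftestinit: nvmf/common.sh has been sourced, a host NQN generated, and the lists of supported NIC PCI IDs (the e810/x722/mlx arrays above) assembled. The entries that follow walk each matching PCI address and resolve it to its kernel network interface through sysfs; stripped of the surrounding plumbing, that lookup is just a glob, sketched here with 0000:0a:00.0 from the "Found ..." lines used as the example address.

  pci=0000:0a:00.0                                    # example: first E810 port reported below
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )  # each entry is .../net/<ifname>
  pci_net_devs=( "${pci_net_devs[@]##*/}" )           # keep just the interface name, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"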
19:23:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:14.833 19:23:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:14.833 19:23:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:14.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:14.833 19:23:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:14.833 19:23:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:14.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:14.833 19:23:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:14.833 19:23:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:14.833 19:23:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.833 19:23:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:14.833 19:23:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.833 19:23:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:14.833 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:14.833 19:23:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.833 19:23:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:14.833 19:23:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.833 19:23:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:14.833 19:23:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.833 19:23:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:14.833 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:14.833 19:23:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.833 19:23:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:14.833 19:23:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:14.833 19:23:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:14.833 19:23:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:14.834 19:23:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.834 19:23:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.834 19:23:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.834 19:23:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:14.834 19:23:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.834 19:23:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.834 19:23:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:14.834 19:23:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.834 19:23:13 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.834 19:23:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:14.834 19:23:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:14.834 19:23:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.834 19:23:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.092 19:23:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.092 19:23:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.092 19:23:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:15.092 19:23:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.092 19:23:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.092 19:23:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.092 19:23:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:15.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:14:15.092 00:14:15.092 --- 10.0.0.2 ping statistics --- 00:14:15.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.092 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:15.092 19:23:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:14:15.092 00:14:15.092 --- 10.0.0.1 ping statistics --- 00:14:15.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.092 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:15.092 19:23:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.092 19:23:13 -- nvmf/common.sh@410 -- # return 0 00:14:15.092 19:23:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:15.092 19:23:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.092 19:23:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:15.092 19:23:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:15.092 19:23:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.092 19:23:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:15.092 19:23:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:15.092 19:23:13 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:15.092 19:23:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:15.092 19:23:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.092 19:23:13 -- common/autotest_common.sh@10 -- # set +x 00:14:15.092 19:23:13 -- nvmf/common.sh@469 -- # nvmfpid=1161868 00:14:15.092 19:23:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:15.092 19:23:13 -- nvmf/common.sh@470 -- # waitforlisten 1161868 00:14:15.092 19:23:13 -- common/autotest_common.sh@829 -- # '[' -z 1161868 ']' 00:14:15.092 19:23:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.092 19:23:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.092 19:23:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
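The nvmf_tcp_init sequence above moves one E810 port (cvl_0_0) into a private network namespace so the target address 10.0.0.2 and the initiator-side interface cvl_0_1 at 10.0.0.1 sit on separate stacks of the same host, verifies reachability both ways, and then launches nvmf_tgt inside that namespace. Condensed from the commands in the trace, with the interface, namespace, and binary paths exactly as they appear there:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The target is backgrounded here only for the sketch; in the harness, waitforlisten then blocks on the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step shown above before any RPC is issued.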
00:14:15.092 19:23:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.092 19:23:13 -- common/autotest_common.sh@10 -- # set +x 00:14:15.092 [2024-11-17 19:23:13.243216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:15.092 [2024-11-17 19:23:13.243302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.092 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.092 [2024-11-17 19:23:13.309920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.350 [2024-11-17 19:23:13.400381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:15.350 [2024-11-17 19:23:13.400538] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.350 [2024-11-17 19:23:13.400556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.350 [2024-11-17 19:23:13.400569] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.350 [2024-11-17 19:23:13.400660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.350 [2024-11-17 19:23:13.400716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.350 [2024-11-17 19:23:13.400719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.282 19:23:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.282 19:23:14 -- common/autotest_common.sh@862 -- # return 0 00:14:16.282 19:23:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:16.282 19:23:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.282 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:16.282 19:23:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.282 19:23:14 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.282 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.282 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:16.282 [2024-11-17 19:23:14.241869] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.282 19:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.282 19:23:14 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:16.282 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.282 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:16.282 19:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.282 19:23:14 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.282 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.282 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:16.282 [2024-11-17 19:23:14.270841] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.282 19:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.282 19:23:14 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:16.282 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.282 19:23:14 -- common/autotest_common.sh@10 -- 
# set +x 00:14:16.282 NULL1 00:14:16.282 19:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.282 19:23:14 -- target/connect_stress.sh@21 -- # PERF_PID=1161987 00:14:16.282 19:23:14 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:16.282 19:23:14 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.282 19:23:14 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.282 19:23:14 -- target/connect_stress.sh@28 -- # cat 00:14:16.282 19:23:14 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:16.282 19:23:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.282 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.282 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:16.543 19:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.543 19:23:14 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:16.543 19:23:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.543 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.543 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:16.803 19:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.803 19:23:14 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:16.803 19:23:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.803 19:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.803 19:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:17.062 19:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.062 19:23:15 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:17.062 19:23:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.062 19:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.062 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:14:17.634 19:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.634 19:23:15 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:17.634 19:23:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.634 19:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.634 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:14:17.895 19:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.895 19:23:15 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:17.895 19:23:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.895 19:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.895 19:23:15 -- common/autotest_common.sh@10 -- # set +x 00:14:18.156 19:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.156 19:23:16 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:18.156 19:23:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.156 19:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.156 19:23:16 -- common/autotest_common.sh@10 -- # set +x 00:14:18.416 19:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.416 19:23:16 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:18.416 19:23:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.416 19:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.416 19:23:16 -- common/autotest_common.sh@10 -- # set +x 00:14:18.676 19:23:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.676 19:23:16 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:18.676 19:23:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.676 19:23:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.676 19:23:16 -- common/autotest_common.sh@10 -- # set +x 00:14:19.246 19:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.246 19:23:17 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:19.246 19:23:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.246 19:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.246 19:23:17 -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.507 19:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.507 19:23:17 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:19.507 19:23:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.507 19:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.507 19:23:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 19:23:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.765 19:23:17 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:19.765 19:23:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.765 19:23:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.765 19:23:17 -- common/autotest_common.sh@10 -- # set +x 00:14:20.025 19:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.025 19:23:18 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:20.025 19:23:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.025 19:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.025 19:23:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.286 19:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.286 19:23:18 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:20.286 19:23:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.286 19:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.286 19:23:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.855 19:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.855 19:23:18 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:20.855 19:23:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.855 19:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.855 19:23:18 -- common/autotest_common.sh@10 -- # set +x 00:14:21.113 19:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.113 19:23:19 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:21.113 19:23:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.113 19:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.113 19:23:19 -- common/autotest_common.sh@10 -- # set +x 00:14:21.372 19:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.372 19:23:19 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:21.372 19:23:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.372 19:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.372 19:23:19 -- common/autotest_common.sh@10 -- # set +x 00:14:21.632 19:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.632 19:23:19 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:21.632 19:23:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.632 19:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.632 19:23:19 -- common/autotest_common.sh@10 -- # set +x 00:14:21.892 19:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.892 19:23:20 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:21.892 19:23:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.892 19:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.892 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:14:22.521 19:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.521 19:23:20 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:22.521 19:23:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.521 19:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.521 19:23:20 -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.521 19:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.521 19:23:20 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:22.521 19:23:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.521 19:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.521 19:23:20 -- common/autotest_common.sh@10 -- # set +x 00:14:23.115 19:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.115 19:23:21 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:23.115 19:23:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.115 19:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.115 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.381 19:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.381 19:23:21 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:23.381 19:23:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.381 19:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.381 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 19:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.642 19:23:21 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:23.642 19:23:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.642 19:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.642 19:23:21 -- common/autotest_common.sh@10 -- # set +x 00:14:23.900 19:23:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.900 19:23:22 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:23.900 19:23:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.900 19:23:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.900 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.160 19:23:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.160 19:23:22 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:24.160 19:23:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.160 19:23:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.160 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.422 19:23:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.422 19:23:22 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:24.422 19:23:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.422 19:23:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.422 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.988 19:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.988 19:23:23 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:24.988 19:23:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.988 19:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.988 19:23:23 -- common/autotest_common.sh@10 -- # set +x 00:14:25.246 19:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.246 19:23:23 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:25.246 19:23:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.246 19:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.246 19:23:23 -- common/autotest_common.sh@10 -- # set +x 00:14:25.503 19:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.504 19:23:23 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:25.504 19:23:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.504 19:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.504 19:23:23 -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.763 19:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.763 19:23:23 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:25.763 19:23:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.763 19:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.763 19:23:23 -- common/autotest_common.sh@10 -- # set +x 00:14:26.023 19:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.023 19:23:24 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:26.282 19:23:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.282 19:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.282 19:23:24 -- common/autotest_common.sh@10 -- # set +x 00:14:26.282 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.539 19:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.539 19:23:24 -- target/connect_stress.sh@34 -- # kill -0 1161987 00:14:26.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1161987) - No such process 00:14:26.539 19:23:24 -- target/connect_stress.sh@38 -- # wait 1161987 00:14:26.539 19:23:24 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:26.539 19:23:24 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:26.539 19:23:24 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:26.539 19:23:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:26.539 19:23:24 -- nvmf/common.sh@116 -- # sync 00:14:26.539 19:23:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:26.539 19:23:24 -- nvmf/common.sh@119 -- # set +e 00:14:26.539 19:23:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:26.539 19:23:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:26.539 rmmod nvme_tcp 00:14:26.539 rmmod nvme_fabrics 00:14:26.539 rmmod nvme_keyring 00:14:26.539 19:23:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:26.539 19:23:24 -- nvmf/common.sh@123 -- # set -e 00:14:26.539 19:23:24 -- nvmf/common.sh@124 -- # return 0 00:14:26.539 19:23:24 -- nvmf/common.sh@477 -- # '[' -n 1161868 ']' 00:14:26.539 19:23:24 -- nvmf/common.sh@478 -- # killprocess 1161868 00:14:26.539 19:23:24 -- common/autotest_common.sh@936 -- # '[' -z 1161868 ']' 00:14:26.539 19:23:24 -- common/autotest_common.sh@940 -- # kill -0 1161868 00:14:26.539 19:23:24 -- common/autotest_common.sh@941 -- # uname 00:14:26.539 19:23:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:26.539 19:23:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1161868 00:14:26.539 19:23:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:26.539 19:23:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:26.539 19:23:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1161868' 00:14:26.539 killing process with pid 1161868 00:14:26.539 19:23:24 -- common/autotest_common.sh@955 -- # kill 1161868 00:14:26.539 19:23:24 -- common/autotest_common.sh@960 -- # wait 1161868 00:14:26.797 19:23:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:26.797 19:23:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:26.797 19:23:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:26.797 19:23:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.797 19:23:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:26.797 19:23:24 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.797 19:23:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.797 19:23:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.700 19:23:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:28.700 00:14:28.700 real 0m16.253s 00:14:28.700 user 0m40.903s 00:14:28.700 sys 0m5.890s 00:14:28.700 19:23:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.700 19:23:26 -- common/autotest_common.sh@10 -- # set +x 00:14:28.700 ************************************ 00:14:28.700 END TEST nvmf_connect_stress 00:14:28.700 ************************************ 00:14:28.958 19:23:26 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:28.958 19:23:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.958 19:23:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.958 19:23:26 -- common/autotest_common.sh@10 -- # set +x 00:14:28.958 ************************************ 00:14:28.958 START TEST nvmf_fused_ordering 00:14:28.958 ************************************ 00:14:28.958 19:23:26 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:28.958 * Looking for test storage... 00:14:28.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.958 19:23:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.958 19:23:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.958 19:23:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.958 19:23:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.958 19:23:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.958 19:23:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.958 19:23:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.958 19:23:27 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.958 19:23:27 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.958 19:23:27 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.958 19:23:27 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.958 19:23:27 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.958 19:23:27 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.958 19:23:27 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.958 19:23:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.958 19:23:27 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.958 19:23:27 -- scripts/common.sh@344 -- # : 1 00:14:28.958 19:23:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.959 19:23:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.959 19:23:27 -- scripts/common.sh@364 -- # decimal 1 00:14:28.959 19:23:27 -- scripts/common.sh@352 -- # local d=1 00:14:28.959 19:23:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.959 19:23:27 -- scripts/common.sh@354 -- # echo 1 00:14:28.959 19:23:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.959 19:23:27 -- scripts/common.sh@365 -- # decimal 2 00:14:28.959 19:23:27 -- scripts/common.sh@352 -- # local d=2 00:14:28.959 19:23:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.959 19:23:27 -- scripts/common.sh@354 -- # echo 2 00:14:28.959 19:23:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.959 19:23:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.959 19:23:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.959 19:23:27 -- scripts/common.sh@367 -- # return 0 00:14:28.959 19:23:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.959 19:23:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.959 --rc genhtml_branch_coverage=1 00:14:28.959 --rc genhtml_function_coverage=1 00:14:28.959 --rc genhtml_legend=1 00:14:28.959 --rc geninfo_all_blocks=1 00:14:28.959 --rc geninfo_unexecuted_blocks=1 00:14:28.959 00:14:28.959 ' 00:14:28.959 19:23:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.959 --rc genhtml_branch_coverage=1 00:14:28.959 --rc genhtml_function_coverage=1 00:14:28.959 --rc genhtml_legend=1 00:14:28.959 --rc geninfo_all_blocks=1 00:14:28.959 --rc geninfo_unexecuted_blocks=1 00:14:28.959 00:14:28.959 ' 00:14:28.959 19:23:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.959 --rc genhtml_branch_coverage=1 00:14:28.959 --rc genhtml_function_coverage=1 00:14:28.959 --rc genhtml_legend=1 00:14:28.959 --rc geninfo_all_blocks=1 00:14:28.959 --rc geninfo_unexecuted_blocks=1 00:14:28.959 00:14:28.959 ' 00:14:28.959 19:23:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.959 --rc genhtml_branch_coverage=1 00:14:28.959 --rc genhtml_function_coverage=1 00:14:28.959 --rc genhtml_legend=1 00:14:28.959 --rc geninfo_all_blocks=1 00:14:28.959 --rc geninfo_unexecuted_blocks=1 00:14:28.959 00:14:28.959 ' 00:14:28.959 19:23:27 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.959 19:23:27 -- nvmf/common.sh@7 -- # uname -s 00:14:28.959 19:23:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.959 19:23:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.959 19:23:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.959 19:23:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.959 19:23:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.959 19:23:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.959 19:23:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.959 19:23:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.959 19:23:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.959 19:23:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.959 19:23:27 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.959 19:23:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.959 19:23:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.959 19:23:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.959 19:23:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.959 19:23:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.959 19:23:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.959 19:23:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.959 19:23:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.959 19:23:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.959 19:23:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.959 19:23:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.959 19:23:27 -- paths/export.sh@5 -- # export PATH 00:14:28.959 19:23:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.959 19:23:27 -- nvmf/common.sh@46 -- # : 0 00:14:28.959 19:23:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.959 19:23:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.959 19:23:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.959 19:23:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.959 19:23:27 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.959 19:23:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.959 19:23:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.959 19:23:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.959 19:23:27 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:28.959 19:23:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.959 19:23:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.959 19:23:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.959 19:23:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.959 19:23:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.959 19:23:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.959 19:23:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.959 19:23:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.959 19:23:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:28.959 19:23:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:28.959 19:23:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:28.959 19:23:27 -- common/autotest_common.sh@10 -- # set +x 00:14:30.865 19:23:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:30.865 19:23:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:30.865 19:23:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:30.865 19:23:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:30.865 19:23:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:30.865 19:23:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:30.865 19:23:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:30.865 19:23:29 -- nvmf/common.sh@294 -- # net_devs=() 00:14:30.865 19:23:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:30.865 19:23:29 -- nvmf/common.sh@295 -- # e810=() 00:14:30.865 19:23:29 -- nvmf/common.sh@295 -- # local -ga e810 00:14:30.865 19:23:29 -- nvmf/common.sh@296 -- # x722=() 00:14:30.866 19:23:29 -- nvmf/common.sh@296 -- # local -ga x722 00:14:30.866 19:23:29 -- nvmf/common.sh@297 -- # mlx=() 00:14:30.866 19:23:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:30.866 19:23:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.866 19:23:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:30.866 19:23:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:30.866 
19:23:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:30.866 19:23:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.866 19:23:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:30.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:30.866 19:23:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.866 19:23:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:30.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:30.866 19:23:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:30.866 19:23:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:30.866 19:23:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.866 19:23:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.866 19:23:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.866 19:23:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.866 19:23:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:30.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:30.866 19:23:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.866 19:23:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.866 19:23:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.866 19:23:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.866 19:23:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.866 19:23:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:30.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:30.866 19:23:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.866 19:23:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:30.866 19:23:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:31.124 19:23:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:31.124 19:23:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:31.124 19:23:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:31.124 19:23:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.124 19:23:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.124 19:23:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.124 19:23:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:31.124 19:23:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.124 19:23:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.124 19:23:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:31.124 19:23:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.124 19:23:29 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.124 19:23:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:31.124 19:23:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:31.124 19:23:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.124 19:23:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.124 19:23:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.124 19:23:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.124 19:23:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:31.124 19:23:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.124 19:23:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.124 19:23:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.124 19:23:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:31.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:14:31.124 00:14:31.124 --- 10.0.0.2 ping statistics --- 00:14:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.124 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:31.124 19:23:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:14:31.124 00:14:31.124 --- 10.0.0.1 ping statistics --- 00:14:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.124 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:31.124 19:23:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.124 19:23:29 -- nvmf/common.sh@410 -- # return 0 00:14:31.124 19:23:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.124 19:23:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.124 19:23:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.124 19:23:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.124 19:23:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.124 19:23:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.124 19:23:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.124 19:23:29 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:31.124 19:23:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.124 19:23:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.124 19:23:29 -- common/autotest_common.sh@10 -- # set +x 00:14:31.125 19:23:29 -- nvmf/common.sh@469 -- # nvmfpid=1165249 00:14:31.125 19:23:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.125 19:23:29 -- nvmf/common.sh@470 -- # waitforlisten 1165249 00:14:31.125 19:23:29 -- common/autotest_common.sh@829 -- # '[' -z 1165249 ']' 00:14:31.125 19:23:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.125 19:23:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.125 19:23:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
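A minimal sketch of the namespace topology the nvmf_tcp_init trace above builds, assuming the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the workspace path seen in this run: the target-side port is moved into its own network namespace so the host-side initiator and the NVMe/TCP target talk over the physical link, and the target app is then launched inside that namespace.

  # assumes root, the cvl_0_0/cvl_0_1 NIC names and 10.0.0.x addresses from this run
  set -e
  ip netns add cvl_0_0_ns_spdk                                  # namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> host reachability
  modprobe nvme-tcp                                             # kernel initiator for later connects
  # target runs inside the namespace on a single core (0x2) with all tracepoints enabled
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

With that topology in place, the waitforlisten step below simply polls the /var/tmp/spdk.sock RPC socket until the target process is ready.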
00:14:31.125 19:23:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.125 19:23:29 -- common/autotest_common.sh@10 -- # set +x 00:14:31.125 [2024-11-17 19:23:29.324968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:31.125 [2024-11-17 19:23:29.325038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.125 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.384 [2024-11-17 19:23:29.391965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.384 [2024-11-17 19:23:29.488037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.384 [2024-11-17 19:23:29.488206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.384 [2024-11-17 19:23:29.488227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.384 [2024-11-17 19:23:29.488242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.384 [2024-11-17 19:23:29.488277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.320 19:23:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.320 19:23:30 -- common/autotest_common.sh@862 -- # return 0 00:14:32.320 19:23:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:32.320 19:23:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.320 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.320 19:23:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.320 19:23:30 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.320 19:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.320 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.320 [2024-11-17 19:23:30.368590] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.320 19:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.320 19:23:30 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.320 19:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.320 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.320 19:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.320 19:23:30 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.320 19:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.320 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.320 [2024-11-17 19:23:30.384844] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.321 19:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.321 19:23:30 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.321 19:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.321 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.321 NULL1 00:14:32.321 19:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.321 19:23:30 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:32.321 19:23:30 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.321 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.321 19:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.321 19:23:30 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:32.321 19:23:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.321 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:32.321 19:23:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.321 19:23:30 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:32.321 [2024-11-17 19:23:30.428355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:32.321 [2024-11-17 19:23:30.428396] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165408 ] 00:14:32.321 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.889 Attached to nqn.2016-06.io.spdk:cnode1 00:14:32.889 Namespace ID: 1 size: 1GB 00:14:32.889 fused_ordering(0) 00:14:32.889 fused_ordering(1) 00:14:32.889 fused_ordering(2) 00:14:32.889 fused_ordering(3) 00:14:32.889 fused_ordering(4) 00:14:32.889 fused_ordering(5) 00:14:32.889 fused_ordering(6) 00:14:32.889 fused_ordering(7) 00:14:32.889 fused_ordering(8) 00:14:32.889 fused_ordering(9) 00:14:32.889 fused_ordering(10) 00:14:32.889 fused_ordering(11) 00:14:32.889 fused_ordering(12) 00:14:32.889 fused_ordering(13) 00:14:32.889 fused_ordering(14) 00:14:32.889 fused_ordering(15) 00:14:32.889 fused_ordering(16) 00:14:32.889 fused_ordering(17) 00:14:32.889 fused_ordering(18) 00:14:32.889 fused_ordering(19) 00:14:32.889 fused_ordering(20) 00:14:32.889 fused_ordering(21) 00:14:32.889 fused_ordering(22) 00:14:32.889 fused_ordering(23) 00:14:32.889 fused_ordering(24) 00:14:32.889 fused_ordering(25) 00:14:32.889 fused_ordering(26) 00:14:32.889 fused_ordering(27) 00:14:32.889 fused_ordering(28) 00:14:32.889 fused_ordering(29) 00:14:32.889 fused_ordering(30) 00:14:32.889 fused_ordering(31) 00:14:32.889 fused_ordering(32) 00:14:32.889 fused_ordering(33) 00:14:32.889 fused_ordering(34) 00:14:32.889 fused_ordering(35) 00:14:32.889 fused_ordering(36) 00:14:32.889 fused_ordering(37) 00:14:32.889 fused_ordering(38) 00:14:32.889 fused_ordering(39) 00:14:32.889 fused_ordering(40) 00:14:32.889 fused_ordering(41) 00:14:32.889 fused_ordering(42) 00:14:32.889 fused_ordering(43) 00:14:32.889 fused_ordering(44) 00:14:32.889 fused_ordering(45) 00:14:32.889 fused_ordering(46) 00:14:32.889 fused_ordering(47) 00:14:32.889 fused_ordering(48) 00:14:32.889 fused_ordering(49) 00:14:32.889 fused_ordering(50) 00:14:32.889 fused_ordering(51) 00:14:32.889 fused_ordering(52) 00:14:32.889 fused_ordering(53) 00:14:32.889 fused_ordering(54) 00:14:32.889 fused_ordering(55) 00:14:32.889 fused_ordering(56) 00:14:32.889 fused_ordering(57) 00:14:32.889 fused_ordering(58) 00:14:32.889 fused_ordering(59) 00:14:32.889 fused_ordering(60) 00:14:32.889 fused_ordering(61) 00:14:32.889 fused_ordering(62) 00:14:32.889 fused_ordering(63) 00:14:32.889 fused_ordering(64) 00:14:32.889 fused_ordering(65) 00:14:32.889 fused_ordering(66) 00:14:32.889 fused_ordering(67) 00:14:32.889 fused_ordering(68) 00:14:32.889 fused_ordering(69) 
00:14:32.889 fused_ordering(70) 00:14:32.889 fused_ordering(71) 00:14:32.889 fused_ordering(72) 00:14:32.889 fused_ordering(73) 00:14:32.889 fused_ordering(74) 00:14:32.889 fused_ordering(75) 00:14:32.889 fused_ordering(76) 00:14:32.889 fused_ordering(77) 00:14:32.889 fused_ordering(78) 00:14:32.889 fused_ordering(79) 00:14:32.889 fused_ordering(80) 00:14:32.889 fused_ordering(81) 00:14:32.889 fused_ordering(82) 00:14:32.889 fused_ordering(83) 00:14:32.889 fused_ordering(84) 00:14:32.889 fused_ordering(85) 00:14:32.889 fused_ordering(86) 00:14:32.889 fused_ordering(87) 00:14:32.889 fused_ordering(88) 00:14:32.889 fused_ordering(89) 00:14:32.889 fused_ordering(90) 00:14:32.889 fused_ordering(91) 00:14:32.889 fused_ordering(92) 00:14:32.889 fused_ordering(93) 00:14:32.889 fused_ordering(94) 00:14:32.889 fused_ordering(95) 00:14:32.889 fused_ordering(96) 00:14:32.889 fused_ordering(97) 00:14:32.889 fused_ordering(98) 00:14:32.889 fused_ordering(99) 00:14:32.889 fused_ordering(100) 00:14:32.889 fused_ordering(101) 00:14:32.889 fused_ordering(102) 00:14:32.889 fused_ordering(103) 00:14:32.889 fused_ordering(104) 00:14:32.889 fused_ordering(105) 00:14:32.889 fused_ordering(106) 00:14:32.889 fused_ordering(107) 00:14:32.889 fused_ordering(108) 00:14:32.889 fused_ordering(109) 00:14:32.889 fused_ordering(110) 00:14:32.889 fused_ordering(111) 00:14:32.889 fused_ordering(112) 00:14:32.889 fused_ordering(113) 00:14:32.889 fused_ordering(114) 00:14:32.889 fused_ordering(115) 00:14:32.889 fused_ordering(116) 00:14:32.889 fused_ordering(117) 00:14:32.889 fused_ordering(118) 00:14:32.889 fused_ordering(119) 00:14:32.889 fused_ordering(120) 00:14:32.889 fused_ordering(121) 00:14:32.889 fused_ordering(122) 00:14:32.889 fused_ordering(123) 00:14:32.889 fused_ordering(124) 00:14:32.889 fused_ordering(125) 00:14:32.889 fused_ordering(126) 00:14:32.889 fused_ordering(127) 00:14:32.889 fused_ordering(128) 00:14:32.889 fused_ordering(129) 00:14:32.889 fused_ordering(130) 00:14:32.889 fused_ordering(131) 00:14:32.889 fused_ordering(132) 00:14:32.889 fused_ordering(133) 00:14:32.889 fused_ordering(134) 00:14:32.889 fused_ordering(135) 00:14:32.889 fused_ordering(136) 00:14:32.889 fused_ordering(137) 00:14:32.889 fused_ordering(138) 00:14:32.889 fused_ordering(139) 00:14:32.889 fused_ordering(140) 00:14:32.889 fused_ordering(141) 00:14:32.889 fused_ordering(142) 00:14:32.889 fused_ordering(143) 00:14:32.889 fused_ordering(144) 00:14:32.889 fused_ordering(145) 00:14:32.889 fused_ordering(146) 00:14:32.889 fused_ordering(147) 00:14:32.889 fused_ordering(148) 00:14:32.889 fused_ordering(149) 00:14:32.889 fused_ordering(150) 00:14:32.889 fused_ordering(151) 00:14:32.889 fused_ordering(152) 00:14:32.889 fused_ordering(153) 00:14:32.889 fused_ordering(154) 00:14:32.889 fused_ordering(155) 00:14:32.889 fused_ordering(156) 00:14:32.889 fused_ordering(157) 00:14:32.889 fused_ordering(158) 00:14:32.889 fused_ordering(159) 00:14:32.889 fused_ordering(160) 00:14:32.889 fused_ordering(161) 00:14:32.889 fused_ordering(162) 00:14:32.889 fused_ordering(163) 00:14:32.889 fused_ordering(164) 00:14:32.889 fused_ordering(165) 00:14:32.889 fused_ordering(166) 00:14:32.889 fused_ordering(167) 00:14:32.889 fused_ordering(168) 00:14:32.889 fused_ordering(169) 00:14:32.889 fused_ordering(170) 00:14:32.889 fused_ordering(171) 00:14:32.889 fused_ordering(172) 00:14:32.889 fused_ordering(173) 00:14:32.889 fused_ordering(174) 00:14:32.889 fused_ordering(175) 00:14:32.889 fused_ordering(176) 00:14:32.889 fused_ordering(177) 00:14:32.889 
fused_ordering(178) 00:14:32.889 fused_ordering(179) 00:14:32.889 fused_ordering(180) 00:14:32.889 fused_ordering(181) 00:14:32.889 fused_ordering(182) 00:14:32.889 fused_ordering(183) 00:14:32.889 fused_ordering(184) 00:14:32.889 fused_ordering(185) 00:14:32.889 fused_ordering(186) 00:14:32.889 fused_ordering(187) 00:14:32.889 fused_ordering(188) 00:14:32.889 fused_ordering(189) 00:14:32.889 fused_ordering(190) 00:14:32.889 fused_ordering(191) 00:14:32.889 fused_ordering(192) 00:14:32.889 fused_ordering(193) 00:14:32.889 fused_ordering(194) 00:14:32.889 fused_ordering(195) 00:14:32.889 fused_ordering(196) 00:14:32.889 fused_ordering(197) 00:14:32.889 fused_ordering(198) 00:14:32.889 fused_ordering(199) 00:14:32.890 fused_ordering(200) 00:14:32.890 fused_ordering(201) 00:14:32.890 fused_ordering(202) 00:14:32.890 fused_ordering(203) 00:14:32.890 fused_ordering(204) 00:14:32.890 fused_ordering(205) 00:14:33.148 fused_ordering(206) 00:14:33.148 fused_ordering(207) 00:14:33.148 fused_ordering(208) 00:14:33.148 fused_ordering(209) 00:14:33.148 fused_ordering(210) 00:14:33.148 fused_ordering(211) 00:14:33.148 fused_ordering(212) 00:14:33.148 fused_ordering(213) 00:14:33.148 fused_ordering(214) 00:14:33.148 fused_ordering(215) 00:14:33.148 fused_ordering(216) 00:14:33.148 fused_ordering(217) 00:14:33.148 fused_ordering(218) 00:14:33.148 fused_ordering(219) 00:14:33.148 fused_ordering(220) 00:14:33.148 fused_ordering(221) 00:14:33.148 fused_ordering(222) 00:14:33.148 fused_ordering(223) 00:14:33.148 fused_ordering(224) 00:14:33.148 fused_ordering(225) 00:14:33.148 fused_ordering(226) 00:14:33.148 fused_ordering(227) 00:14:33.148 fused_ordering(228) 00:14:33.148 fused_ordering(229) 00:14:33.148 fused_ordering(230) 00:14:33.148 fused_ordering(231) 00:14:33.148 fused_ordering(232) 00:14:33.148 fused_ordering(233) 00:14:33.148 fused_ordering(234) 00:14:33.148 fused_ordering(235) 00:14:33.148 fused_ordering(236) 00:14:33.148 fused_ordering(237) 00:14:33.148 fused_ordering(238) 00:14:33.148 fused_ordering(239) 00:14:33.148 fused_ordering(240) 00:14:33.148 fused_ordering(241) 00:14:33.148 fused_ordering(242) 00:14:33.148 fused_ordering(243) 00:14:33.148 fused_ordering(244) 00:14:33.148 fused_ordering(245) 00:14:33.148 fused_ordering(246) 00:14:33.148 fused_ordering(247) 00:14:33.148 fused_ordering(248) 00:14:33.148 fused_ordering(249) 00:14:33.148 fused_ordering(250) 00:14:33.148 fused_ordering(251) 00:14:33.148 fused_ordering(252) 00:14:33.148 fused_ordering(253) 00:14:33.148 fused_ordering(254) 00:14:33.148 fused_ordering(255) 00:14:33.148 fused_ordering(256) 00:14:33.148 fused_ordering(257) 00:14:33.148 fused_ordering(258) 00:14:33.148 fused_ordering(259) 00:14:33.148 fused_ordering(260) 00:14:33.148 fused_ordering(261) 00:14:33.148 fused_ordering(262) 00:14:33.148 fused_ordering(263) 00:14:33.148 fused_ordering(264) 00:14:33.148 fused_ordering(265) 00:14:33.148 fused_ordering(266) 00:14:33.148 fused_ordering(267) 00:14:33.148 fused_ordering(268) 00:14:33.148 fused_ordering(269) 00:14:33.148 fused_ordering(270) 00:14:33.148 fused_ordering(271) 00:14:33.148 fused_ordering(272) 00:14:33.148 fused_ordering(273) 00:14:33.148 fused_ordering(274) 00:14:33.148 fused_ordering(275) 00:14:33.148 fused_ordering(276) 00:14:33.148 fused_ordering(277) 00:14:33.148 fused_ordering(278) 00:14:33.148 fused_ordering(279) 00:14:33.148 fused_ordering(280) 00:14:33.148 fused_ordering(281) 00:14:33.148 fused_ordering(282) 00:14:33.148 fused_ordering(283) 00:14:33.148 fused_ordering(284) 00:14:33.148 fused_ordering(285) 
00:14:33.148 fused_ordering(286) 00:14:33.148 fused_ordering(287) 00:14:33.148 fused_ordering(288) 00:14:33.148 fused_ordering(289) 00:14:33.148 fused_ordering(290) 00:14:33.148 fused_ordering(291) 00:14:33.148 fused_ordering(292) 00:14:33.148 fused_ordering(293) 00:14:33.148 fused_ordering(294) 00:14:33.148 fused_ordering(295) 00:14:33.148 fused_ordering(296) 00:14:33.148 fused_ordering(297) 00:14:33.148 fused_ordering(298) 00:14:33.148 fused_ordering(299) 00:14:33.148 fused_ordering(300) 00:14:33.148 fused_ordering(301) 00:14:33.148 fused_ordering(302) 00:14:33.148 fused_ordering(303) 00:14:33.148 fused_ordering(304) 00:14:33.148 fused_ordering(305) 00:14:33.148 fused_ordering(306) 00:14:33.148 fused_ordering(307) 00:14:33.148 fused_ordering(308) 00:14:33.148 fused_ordering(309) 00:14:33.148 fused_ordering(310) 00:14:33.148 fused_ordering(311) 00:14:33.148 fused_ordering(312) 00:14:33.148 fused_ordering(313) 00:14:33.148 fused_ordering(314) 00:14:33.148 fused_ordering(315) 00:14:33.148 fused_ordering(316) 00:14:33.148 fused_ordering(317) 00:14:33.148 fused_ordering(318) 00:14:33.148 fused_ordering(319) 00:14:33.148 fused_ordering(320) 00:14:33.148 fused_ordering(321) 00:14:33.148 fused_ordering(322) 00:14:33.148 fused_ordering(323) 00:14:33.148 fused_ordering(324) 00:14:33.148 fused_ordering(325) 00:14:33.148 fused_ordering(326) 00:14:33.148 fused_ordering(327) 00:14:33.148 fused_ordering(328) 00:14:33.148 fused_ordering(329) 00:14:33.148 fused_ordering(330) 00:14:33.148 fused_ordering(331) 00:14:33.148 fused_ordering(332) 00:14:33.148 fused_ordering(333) 00:14:33.148 fused_ordering(334) 00:14:33.148 fused_ordering(335) 00:14:33.148 fused_ordering(336) 00:14:33.148 fused_ordering(337) 00:14:33.148 fused_ordering(338) 00:14:33.148 fused_ordering(339) 00:14:33.148 fused_ordering(340) 00:14:33.148 fused_ordering(341) 00:14:33.148 fused_ordering(342) 00:14:33.148 fused_ordering(343) 00:14:33.148 fused_ordering(344) 00:14:33.148 fused_ordering(345) 00:14:33.148 fused_ordering(346) 00:14:33.148 fused_ordering(347) 00:14:33.148 fused_ordering(348) 00:14:33.149 fused_ordering(349) 00:14:33.149 fused_ordering(350) 00:14:33.149 fused_ordering(351) 00:14:33.149 fused_ordering(352) 00:14:33.149 fused_ordering(353) 00:14:33.149 fused_ordering(354) 00:14:33.149 fused_ordering(355) 00:14:33.149 fused_ordering(356) 00:14:33.149 fused_ordering(357) 00:14:33.149 fused_ordering(358) 00:14:33.149 fused_ordering(359) 00:14:33.149 fused_ordering(360) 00:14:33.149 fused_ordering(361) 00:14:33.149 fused_ordering(362) 00:14:33.149 fused_ordering(363) 00:14:33.149 fused_ordering(364) 00:14:33.149 fused_ordering(365) 00:14:33.149 fused_ordering(366) 00:14:33.149 fused_ordering(367) 00:14:33.149 fused_ordering(368) 00:14:33.149 fused_ordering(369) 00:14:33.149 fused_ordering(370) 00:14:33.149 fused_ordering(371) 00:14:33.149 fused_ordering(372) 00:14:33.149 fused_ordering(373) 00:14:33.149 fused_ordering(374) 00:14:33.149 fused_ordering(375) 00:14:33.149 fused_ordering(376) 00:14:33.149 fused_ordering(377) 00:14:33.149 fused_ordering(378) 00:14:33.149 fused_ordering(379) 00:14:33.149 fused_ordering(380) 00:14:33.149 fused_ordering(381) 00:14:33.149 fused_ordering(382) 00:14:33.149 fused_ordering(383) 00:14:33.149 fused_ordering(384) 00:14:33.149 fused_ordering(385) 00:14:33.149 fused_ordering(386) 00:14:33.149 fused_ordering(387) 00:14:33.149 fused_ordering(388) 00:14:33.149 fused_ordering(389) 00:14:33.149 fused_ordering(390) 00:14:33.149 fused_ordering(391) 00:14:33.149 fused_ordering(392) 00:14:33.149 
fused_ordering(393) 00:14:33.149 fused_ordering(394) 00:14:33.149 fused_ordering(395) 00:14:33.149 fused_ordering(396) 00:14:33.149 fused_ordering(397) 00:14:33.149 fused_ordering(398) 00:14:33.149 fused_ordering(399) 00:14:33.149 fused_ordering(400) 00:14:33.149 fused_ordering(401) 00:14:33.149 fused_ordering(402) 00:14:33.149 fused_ordering(403) 00:14:33.149 fused_ordering(404) 00:14:33.149 fused_ordering(405) 00:14:33.149 fused_ordering(406) 00:14:33.149 fused_ordering(407) 00:14:33.149 fused_ordering(408) 00:14:33.149 fused_ordering(409) 00:14:33.149 fused_ordering(410) 00:14:33.717 fused_ordering(411) 00:14:33.717 fused_ordering(412) 00:14:33.717 fused_ordering(413) 00:14:33.717 fused_ordering(414) 00:14:33.717 fused_ordering(415) 00:14:33.717 fused_ordering(416) 00:14:33.717 fused_ordering(417) 00:14:33.717 fused_ordering(418) 00:14:33.717 fused_ordering(419) 00:14:33.717 fused_ordering(420) 00:14:33.717 fused_ordering(421) 00:14:33.717 fused_ordering(422) 00:14:33.717 fused_ordering(423) 00:14:33.717 fused_ordering(424) 00:14:33.717 fused_ordering(425) 00:14:33.717 fused_ordering(426) 00:14:33.717 fused_ordering(427) 00:14:33.717 fused_ordering(428) 00:14:33.717 fused_ordering(429) 00:14:33.717 fused_ordering(430) 00:14:33.717 fused_ordering(431) 00:14:33.717 fused_ordering(432) 00:14:33.717 fused_ordering(433) 00:14:33.717 fused_ordering(434) 00:14:33.717 fused_ordering(435) 00:14:33.717 fused_ordering(436) 00:14:33.717 fused_ordering(437) 00:14:33.717 fused_ordering(438) 00:14:33.717 fused_ordering(439) 00:14:33.717 fused_ordering(440) 00:14:33.717 fused_ordering(441) 00:14:33.717 fused_ordering(442) 00:14:33.717 fused_ordering(443) 00:14:33.717 fused_ordering(444) 00:14:33.717 fused_ordering(445) 00:14:33.717 fused_ordering(446) 00:14:33.717 fused_ordering(447) 00:14:33.717 fused_ordering(448) 00:14:33.717 fused_ordering(449) 00:14:33.717 fused_ordering(450) 00:14:33.717 fused_ordering(451) 00:14:33.717 fused_ordering(452) 00:14:33.717 fused_ordering(453) 00:14:33.717 fused_ordering(454) 00:14:33.717 fused_ordering(455) 00:14:33.717 fused_ordering(456) 00:14:33.717 fused_ordering(457) 00:14:33.717 fused_ordering(458) 00:14:33.717 fused_ordering(459) 00:14:33.717 fused_ordering(460) 00:14:33.717 fused_ordering(461) 00:14:33.717 fused_ordering(462) 00:14:33.717 fused_ordering(463) 00:14:33.717 fused_ordering(464) 00:14:33.717 fused_ordering(465) 00:14:33.717 fused_ordering(466) 00:14:33.717 fused_ordering(467) 00:14:33.717 fused_ordering(468) 00:14:33.717 fused_ordering(469) 00:14:33.717 fused_ordering(470) 00:14:33.717 fused_ordering(471) 00:14:33.717 fused_ordering(472) 00:14:33.717 fused_ordering(473) 00:14:33.717 fused_ordering(474) 00:14:33.717 fused_ordering(475) 00:14:33.717 fused_ordering(476) 00:14:33.717 fused_ordering(477) 00:14:33.717 fused_ordering(478) 00:14:33.717 fused_ordering(479) 00:14:33.717 fused_ordering(480) 00:14:33.717 fused_ordering(481) 00:14:33.717 fused_ordering(482) 00:14:33.717 fused_ordering(483) 00:14:33.717 fused_ordering(484) 00:14:33.717 fused_ordering(485) 00:14:33.717 fused_ordering(486) 00:14:33.717 fused_ordering(487) 00:14:33.717 fused_ordering(488) 00:14:33.717 fused_ordering(489) 00:14:33.717 fused_ordering(490) 00:14:33.717 fused_ordering(491) 00:14:33.717 fused_ordering(492) 00:14:33.717 fused_ordering(493) 00:14:33.717 fused_ordering(494) 00:14:33.717 fused_ordering(495) 00:14:33.717 fused_ordering(496) 00:14:33.717 fused_ordering(497) 00:14:33.717 fused_ordering(498) 00:14:33.717 fused_ordering(499) 00:14:33.717 fused_ordering(500) 
00:14:33.717 fused_ordering(501) 00:14:33.717 fused_ordering(502) 00:14:33.717 fused_ordering(503) 00:14:33.717 fused_ordering(504) 00:14:33.717 fused_ordering(505) 00:14:33.717 fused_ordering(506) 00:14:33.717 fused_ordering(507) 00:14:33.717 fused_ordering(508) 00:14:33.717 fused_ordering(509) 00:14:33.717 fused_ordering(510) 00:14:33.717 fused_ordering(511) 00:14:33.717 fused_ordering(512) 00:14:33.717 fused_ordering(513) 00:14:33.717 fused_ordering(514) 00:14:33.717 fused_ordering(515) 00:14:33.717 fused_ordering(516) 00:14:33.717 fused_ordering(517) 00:14:33.717 fused_ordering(518) 00:14:33.717 fused_ordering(519) 00:14:33.717 fused_ordering(520) 00:14:33.717 fused_ordering(521) 00:14:33.717 fused_ordering(522) 00:14:33.717 fused_ordering(523) 00:14:33.717 fused_ordering(524) 00:14:33.717 fused_ordering(525) 00:14:33.717 fused_ordering(526) 00:14:33.717 fused_ordering(527) 00:14:33.717 fused_ordering(528) 00:14:33.717 fused_ordering(529) 00:14:33.717 fused_ordering(530) 00:14:33.717 fused_ordering(531) 00:14:33.717 fused_ordering(532) 00:14:33.717 fused_ordering(533) 00:14:33.717 fused_ordering(534) 00:14:33.717 fused_ordering(535) 00:14:33.717 fused_ordering(536) 00:14:33.717 fused_ordering(537) 00:14:33.717 fused_ordering(538) 00:14:33.717 fused_ordering(539) 00:14:33.717 fused_ordering(540) 00:14:33.717 fused_ordering(541) 00:14:33.717 fused_ordering(542) 00:14:33.717 fused_ordering(543) 00:14:33.717 fused_ordering(544) 00:14:33.717 fused_ordering(545) 00:14:33.717 fused_ordering(546) 00:14:33.717 fused_ordering(547) 00:14:33.717 fused_ordering(548) 00:14:33.717 fused_ordering(549) 00:14:33.718 fused_ordering(550) 00:14:33.718 fused_ordering(551) 00:14:33.718 fused_ordering(552) 00:14:33.718 fused_ordering(553) 00:14:33.718 fused_ordering(554) 00:14:33.718 fused_ordering(555) 00:14:33.718 fused_ordering(556) 00:14:33.718 fused_ordering(557) 00:14:33.718 fused_ordering(558) 00:14:33.718 fused_ordering(559) 00:14:33.718 fused_ordering(560) 00:14:33.718 fused_ordering(561) 00:14:33.718 fused_ordering(562) 00:14:33.718 fused_ordering(563) 00:14:33.718 fused_ordering(564) 00:14:33.718 fused_ordering(565) 00:14:33.718 fused_ordering(566) 00:14:33.718 fused_ordering(567) 00:14:33.718 fused_ordering(568) 00:14:33.718 fused_ordering(569) 00:14:33.718 fused_ordering(570) 00:14:33.718 fused_ordering(571) 00:14:33.718 fused_ordering(572) 00:14:33.718 fused_ordering(573) 00:14:33.718 fused_ordering(574) 00:14:33.718 fused_ordering(575) 00:14:33.718 fused_ordering(576) 00:14:33.718 fused_ordering(577) 00:14:33.718 fused_ordering(578) 00:14:33.718 fused_ordering(579) 00:14:33.718 fused_ordering(580) 00:14:33.718 fused_ordering(581) 00:14:33.718 fused_ordering(582) 00:14:33.718 fused_ordering(583) 00:14:33.718 fused_ordering(584) 00:14:33.718 fused_ordering(585) 00:14:33.718 fused_ordering(586) 00:14:33.718 fused_ordering(587) 00:14:33.718 fused_ordering(588) 00:14:33.718 fused_ordering(589) 00:14:33.718 fused_ordering(590) 00:14:33.718 fused_ordering(591) 00:14:33.718 fused_ordering(592) 00:14:33.718 fused_ordering(593) 00:14:33.718 fused_ordering(594) 00:14:33.718 fused_ordering(595) 00:14:33.718 fused_ordering(596) 00:14:33.718 fused_ordering(597) 00:14:33.718 fused_ordering(598) 00:14:33.718 fused_ordering(599) 00:14:33.718 fused_ordering(600) 00:14:33.718 fused_ordering(601) 00:14:33.718 fused_ordering(602) 00:14:33.718 fused_ordering(603) 00:14:33.718 fused_ordering(604) 00:14:33.718 fused_ordering(605) 00:14:33.718 fused_ordering(606) 00:14:33.718 fused_ordering(607) 00:14:33.718 
fused_ordering(608) 00:14:33.718 fused_ordering(609) 00:14:33.718 fused_ordering(610) 00:14:33.718 fused_ordering(611) 00:14:33.718 fused_ordering(612) 00:14:33.718 fused_ordering(613) 00:14:33.718 fused_ordering(614) 00:14:33.718 fused_ordering(615) 00:14:34.287 fused_ordering(616) 00:14:34.287 fused_ordering(617) 00:14:34.287 fused_ordering(618) 00:14:34.287 fused_ordering(619) 00:14:34.287 fused_ordering(620) 00:14:34.287 fused_ordering(621) 00:14:34.287 fused_ordering(622) 00:14:34.287 fused_ordering(623) 00:14:34.287 fused_ordering(624) 00:14:34.287 fused_ordering(625) 00:14:34.287 fused_ordering(626) 00:14:34.287 fused_ordering(627) 00:14:34.287 fused_ordering(628) 00:14:34.287 fused_ordering(629) 00:14:34.287 fused_ordering(630) 00:14:34.287 fused_ordering(631) 00:14:34.287 fused_ordering(632) 00:14:34.287 fused_ordering(633) 00:14:34.287 fused_ordering(634) 00:14:34.287 fused_ordering(635) 00:14:34.287 fused_ordering(636) 00:14:34.287 fused_ordering(637) 00:14:34.287 fused_ordering(638) 00:14:34.287 fused_ordering(639) 00:14:34.287 fused_ordering(640) 00:14:34.287 fused_ordering(641) 00:14:34.287 fused_ordering(642) 00:14:34.287 fused_ordering(643) 00:14:34.287 fused_ordering(644) 00:14:34.287 fused_ordering(645) 00:14:34.287 fused_ordering(646) 00:14:34.287 fused_ordering(647) 00:14:34.287 fused_ordering(648) 00:14:34.287 fused_ordering(649) 00:14:34.287 fused_ordering(650) 00:14:34.287 fused_ordering(651) 00:14:34.287 fused_ordering(652) 00:14:34.287 fused_ordering(653) 00:14:34.287 fused_ordering(654) 00:14:34.287 fused_ordering(655) 00:14:34.287 fused_ordering(656) 00:14:34.287 fused_ordering(657) 00:14:34.287 fused_ordering(658) 00:14:34.287 fused_ordering(659) 00:14:34.287 fused_ordering(660) 00:14:34.287 fused_ordering(661) 00:14:34.287 fused_ordering(662) 00:14:34.287 fused_ordering(663) 00:14:34.287 fused_ordering(664) 00:14:34.287 fused_ordering(665) 00:14:34.287 fused_ordering(666) 00:14:34.287 fused_ordering(667) 00:14:34.287 fused_ordering(668) 00:14:34.287 fused_ordering(669) 00:14:34.287 fused_ordering(670) 00:14:34.287 fused_ordering(671) 00:14:34.287 fused_ordering(672) 00:14:34.287 fused_ordering(673) 00:14:34.287 fused_ordering(674) 00:14:34.287 fused_ordering(675) 00:14:34.287 fused_ordering(676) 00:14:34.287 fused_ordering(677) 00:14:34.287 fused_ordering(678) 00:14:34.287 fused_ordering(679) 00:14:34.287 fused_ordering(680) 00:14:34.287 fused_ordering(681) 00:14:34.287 fused_ordering(682) 00:14:34.287 fused_ordering(683) 00:14:34.287 fused_ordering(684) 00:14:34.287 fused_ordering(685) 00:14:34.287 fused_ordering(686) 00:14:34.287 fused_ordering(687) 00:14:34.287 fused_ordering(688) 00:14:34.287 fused_ordering(689) 00:14:34.287 fused_ordering(690) 00:14:34.287 fused_ordering(691) 00:14:34.287 fused_ordering(692) 00:14:34.287 fused_ordering(693) 00:14:34.287 fused_ordering(694) 00:14:34.288 fused_ordering(695) 00:14:34.288 fused_ordering(696) 00:14:34.288 fused_ordering(697) 00:14:34.288 fused_ordering(698) 00:14:34.288 fused_ordering(699) 00:14:34.288 fused_ordering(700) 00:14:34.288 fused_ordering(701) 00:14:34.288 fused_ordering(702) 00:14:34.288 fused_ordering(703) 00:14:34.288 fused_ordering(704) 00:14:34.288 fused_ordering(705) 00:14:34.288 fused_ordering(706) 00:14:34.288 fused_ordering(707) 00:14:34.288 fused_ordering(708) 00:14:34.288 fused_ordering(709) 00:14:34.288 fused_ordering(710) 00:14:34.288 fused_ordering(711) 00:14:34.288 fused_ordering(712) 00:14:34.288 fused_ordering(713) 00:14:34.288 fused_ordering(714) 00:14:34.288 fused_ordering(715) 
00:14:34.288 fused_ordering(716) 00:14:34.288 fused_ordering(717) 00:14:34.288 fused_ordering(718) 00:14:34.288 fused_ordering(719) 00:14:34.288 fused_ordering(720) 00:14:34.288 fused_ordering(721) 00:14:34.288 fused_ordering(722) 00:14:34.288 fused_ordering(723) 00:14:34.288 fused_ordering(724) 00:14:34.288 fused_ordering(725) 00:14:34.288 fused_ordering(726) 00:14:34.288 fused_ordering(727) 00:14:34.288 fused_ordering(728) 00:14:34.288 fused_ordering(729) 00:14:34.288 fused_ordering(730) 00:14:34.288 fused_ordering(731) 00:14:34.288 fused_ordering(732) 00:14:34.288 fused_ordering(733) 00:14:34.288 fused_ordering(734) 00:14:34.288 fused_ordering(735) 00:14:34.288 fused_ordering(736) 00:14:34.288 fused_ordering(737) 00:14:34.288 fused_ordering(738) 00:14:34.288 fused_ordering(739) 00:14:34.288 fused_ordering(740) 00:14:34.288 fused_ordering(741) 00:14:34.288 fused_ordering(742) 00:14:34.288 fused_ordering(743) 00:14:34.288 fused_ordering(744) 00:14:34.288 fused_ordering(745) 00:14:34.288 fused_ordering(746) 00:14:34.288 fused_ordering(747) 00:14:34.288 fused_ordering(748) 00:14:34.288 fused_ordering(749) 00:14:34.288 fused_ordering(750) 00:14:34.288 fused_ordering(751) 00:14:34.288 fused_ordering(752) 00:14:34.288 fused_ordering(753) 00:14:34.288 fused_ordering(754) 00:14:34.288 fused_ordering(755) 00:14:34.288 fused_ordering(756) 00:14:34.288 fused_ordering(757) 00:14:34.288 fused_ordering(758) 00:14:34.288 fused_ordering(759) 00:14:34.288 fused_ordering(760) 00:14:34.288 fused_ordering(761) 00:14:34.288 fused_ordering(762) 00:14:34.288 fused_ordering(763) 00:14:34.288 fused_ordering(764) 00:14:34.288 fused_ordering(765) 00:14:34.288 fused_ordering(766) 00:14:34.288 fused_ordering(767) 00:14:34.288 fused_ordering(768) 00:14:34.288 fused_ordering(769) 00:14:34.288 fused_ordering(770) 00:14:34.288 fused_ordering(771) 00:14:34.288 fused_ordering(772) 00:14:34.288 fused_ordering(773) 00:14:34.288 fused_ordering(774) 00:14:34.288 fused_ordering(775) 00:14:34.288 fused_ordering(776) 00:14:34.288 fused_ordering(777) 00:14:34.288 fused_ordering(778) 00:14:34.288 fused_ordering(779) 00:14:34.288 fused_ordering(780) 00:14:34.288 fused_ordering(781) 00:14:34.288 fused_ordering(782) 00:14:34.288 fused_ordering(783) 00:14:34.288 fused_ordering(784) 00:14:34.288 fused_ordering(785) 00:14:34.288 fused_ordering(786) 00:14:34.288 fused_ordering(787) 00:14:34.288 fused_ordering(788) 00:14:34.288 fused_ordering(789) 00:14:34.288 fused_ordering(790) 00:14:34.288 fused_ordering(791) 00:14:34.288 fused_ordering(792) 00:14:34.288 fused_ordering(793) 00:14:34.288 fused_ordering(794) 00:14:34.288 fused_ordering(795) 00:14:34.288 fused_ordering(796) 00:14:34.288 fused_ordering(797) 00:14:34.288 fused_ordering(798) 00:14:34.288 fused_ordering(799) 00:14:34.288 fused_ordering(800) 00:14:34.288 fused_ordering(801) 00:14:34.288 fused_ordering(802) 00:14:34.288 fused_ordering(803) 00:14:34.288 fused_ordering(804) 00:14:34.288 fused_ordering(805) 00:14:34.288 fused_ordering(806) 00:14:34.288 fused_ordering(807) 00:14:34.288 fused_ordering(808) 00:14:34.288 fused_ordering(809) 00:14:34.288 fused_ordering(810) 00:14:34.288 fused_ordering(811) 00:14:34.288 fused_ordering(812) 00:14:34.288 fused_ordering(813) 00:14:34.288 fused_ordering(814) 00:14:34.288 fused_ordering(815) 00:14:34.288 fused_ordering(816) 00:14:34.288 fused_ordering(817) 00:14:34.288 fused_ordering(818) 00:14:34.288 fused_ordering(819) 00:14:34.288 fused_ordering(820) 00:14:34.854 fused_ordering(821) 00:14:34.854 fused_ordering(822) 00:14:34.854 
fused_ordering(823) 00:14:34.854 fused_ordering(824) 00:14:34.854 fused_ordering(825) 00:14:34.854 fused_ordering(826) 00:14:34.854 fused_ordering(827) 00:14:34.854 fused_ordering(828) 00:14:34.854 fused_ordering(829) 00:14:34.854 fused_ordering(830) 00:14:34.854 fused_ordering(831) 00:14:34.854 fused_ordering(832) 00:14:34.854 fused_ordering(833) 00:14:34.854 fused_ordering(834) 00:14:34.854 fused_ordering(835) 00:14:34.854 fused_ordering(836) 00:14:34.854 fused_ordering(837) 00:14:34.854 fused_ordering(838) 00:14:34.854 fused_ordering(839) 00:14:34.854 fused_ordering(840) 00:14:34.854 fused_ordering(841) 00:14:34.854 fused_ordering(842) 00:14:34.854 fused_ordering(843) 00:14:34.854 fused_ordering(844) 00:14:34.854 fused_ordering(845) 00:14:34.854 fused_ordering(846) 00:14:34.854 fused_ordering(847) 00:14:34.854 fused_ordering(848) 00:14:34.854 fused_ordering(849) 00:14:34.854 fused_ordering(850) 00:14:34.854 fused_ordering(851) 00:14:34.854 fused_ordering(852) 00:14:34.854 fused_ordering(853) 00:14:34.854 fused_ordering(854) 00:14:34.854 fused_ordering(855) 00:14:34.854 fused_ordering(856) 00:14:34.854 fused_ordering(857) 00:14:34.854 fused_ordering(858) 00:14:34.854 fused_ordering(859) 00:14:34.854 fused_ordering(860) 00:14:34.854 fused_ordering(861) 00:14:34.854 fused_ordering(862) 00:14:34.854 fused_ordering(863) 00:14:34.854 fused_ordering(864) 00:14:34.854 fused_ordering(865) 00:14:34.854 fused_ordering(866) 00:14:34.854 fused_ordering(867) 00:14:34.854 fused_ordering(868) 00:14:34.854 fused_ordering(869) 00:14:34.854 fused_ordering(870) 00:14:34.854 fused_ordering(871) 00:14:34.854 fused_ordering(872) 00:14:34.854 fused_ordering(873) 00:14:34.854 fused_ordering(874) 00:14:34.854 fused_ordering(875) 00:14:34.854 fused_ordering(876) 00:14:34.854 fused_ordering(877) 00:14:34.854 fused_ordering(878) 00:14:34.854 fused_ordering(879) 00:14:34.854 fused_ordering(880) 00:14:34.854 fused_ordering(881) 00:14:34.854 fused_ordering(882) 00:14:34.854 fused_ordering(883) 00:14:34.854 fused_ordering(884) 00:14:34.854 fused_ordering(885) 00:14:34.854 fused_ordering(886) 00:14:34.854 fused_ordering(887) 00:14:34.854 fused_ordering(888) 00:14:34.854 fused_ordering(889) 00:14:34.854 fused_ordering(890) 00:14:34.854 fused_ordering(891) 00:14:34.854 fused_ordering(892) 00:14:34.854 fused_ordering(893) 00:14:34.854 fused_ordering(894) 00:14:34.854 fused_ordering(895) 00:14:34.854 fused_ordering(896) 00:14:34.854 fused_ordering(897) 00:14:34.854 fused_ordering(898) 00:14:34.854 fused_ordering(899) 00:14:34.854 fused_ordering(900) 00:14:34.854 fused_ordering(901) 00:14:34.854 fused_ordering(902) 00:14:34.854 fused_ordering(903) 00:14:34.854 fused_ordering(904) 00:14:34.854 fused_ordering(905) 00:14:34.854 fused_ordering(906) 00:14:34.854 fused_ordering(907) 00:14:34.854 fused_ordering(908) 00:14:34.854 fused_ordering(909) 00:14:34.854 fused_ordering(910) 00:14:34.854 fused_ordering(911) 00:14:34.854 fused_ordering(912) 00:14:34.854 fused_ordering(913) 00:14:34.854 fused_ordering(914) 00:14:34.854 fused_ordering(915) 00:14:34.854 fused_ordering(916) 00:14:34.854 fused_ordering(917) 00:14:34.854 fused_ordering(918) 00:14:34.854 fused_ordering(919) 00:14:34.854 fused_ordering(920) 00:14:34.854 fused_ordering(921) 00:14:34.854 fused_ordering(922) 00:14:34.854 fused_ordering(923) 00:14:34.854 fused_ordering(924) 00:14:34.854 fused_ordering(925) 00:14:34.854 fused_ordering(926) 00:14:34.854 fused_ordering(927) 00:14:34.854 fused_ordering(928) 00:14:34.854 fused_ordering(929) 00:14:34.854 fused_ordering(930) 
00:14:34.854 fused_ordering(931) 00:14:34.854 fused_ordering(932) 00:14:34.854 fused_ordering(933) 00:14:34.854 fused_ordering(934) 00:14:34.854 fused_ordering(935) 00:14:34.854 fused_ordering(936) 00:14:34.854 fused_ordering(937) 00:14:34.854 fused_ordering(938) 00:14:34.854 fused_ordering(939) 00:14:34.854 fused_ordering(940) 00:14:34.854 fused_ordering(941) 00:14:34.854 fused_ordering(942) 00:14:34.854 fused_ordering(943) 00:14:34.854 fused_ordering(944) 00:14:34.854 fused_ordering(945) 00:14:34.854 fused_ordering(946) 00:14:34.854 fused_ordering(947) 00:14:34.854 fused_ordering(948) 00:14:34.854 fused_ordering(949) 00:14:34.854 fused_ordering(950) 00:14:34.854 fused_ordering(951) 00:14:34.854 fused_ordering(952) 00:14:34.854 fused_ordering(953) 00:14:34.854 fused_ordering(954) 00:14:34.854 fused_ordering(955) 00:14:34.854 fused_ordering(956) 00:14:34.854 fused_ordering(957) 00:14:34.854 fused_ordering(958) 00:14:34.854 fused_ordering(959) 00:14:34.854 fused_ordering(960) 00:14:34.854 fused_ordering(961) 00:14:34.854 fused_ordering(962) 00:14:34.854 fused_ordering(963) 00:14:34.854 fused_ordering(964) 00:14:34.854 fused_ordering(965) 00:14:34.854 fused_ordering(966) 00:14:34.854 fused_ordering(967) 00:14:34.854 fused_ordering(968) 00:14:34.854 fused_ordering(969) 00:14:34.854 fused_ordering(970) 00:14:34.854 fused_ordering(971) 00:14:34.854 fused_ordering(972) 00:14:34.855 fused_ordering(973) 00:14:34.855 fused_ordering(974) 00:14:34.855 fused_ordering(975) 00:14:34.855 fused_ordering(976) 00:14:34.855 fused_ordering(977) 00:14:34.855 fused_ordering(978) 00:14:34.855 fused_ordering(979) 00:14:34.855 fused_ordering(980) 00:14:34.855 fused_ordering(981) 00:14:34.855 fused_ordering(982) 00:14:34.855 fused_ordering(983) 00:14:34.855 fused_ordering(984) 00:14:34.855 fused_ordering(985) 00:14:34.855 fused_ordering(986) 00:14:34.855 fused_ordering(987) 00:14:34.855 fused_ordering(988) 00:14:34.855 fused_ordering(989) 00:14:34.855 fused_ordering(990) 00:14:34.855 fused_ordering(991) 00:14:34.855 fused_ordering(992) 00:14:34.855 fused_ordering(993) 00:14:34.855 fused_ordering(994) 00:14:34.855 fused_ordering(995) 00:14:34.855 fused_ordering(996) 00:14:34.855 fused_ordering(997) 00:14:34.855 fused_ordering(998) 00:14:34.855 fused_ordering(999) 00:14:34.855 fused_ordering(1000) 00:14:34.855 fused_ordering(1001) 00:14:34.855 fused_ordering(1002) 00:14:34.855 fused_ordering(1003) 00:14:34.855 fused_ordering(1004) 00:14:34.855 fused_ordering(1005) 00:14:34.855 fused_ordering(1006) 00:14:34.855 fused_ordering(1007) 00:14:34.855 fused_ordering(1008) 00:14:34.855 fused_ordering(1009) 00:14:34.855 fused_ordering(1010) 00:14:34.855 fused_ordering(1011) 00:14:34.855 fused_ordering(1012) 00:14:34.855 fused_ordering(1013) 00:14:34.855 fused_ordering(1014) 00:14:34.855 fused_ordering(1015) 00:14:34.855 fused_ordering(1016) 00:14:34.855 fused_ordering(1017) 00:14:34.855 fused_ordering(1018) 00:14:34.855 fused_ordering(1019) 00:14:34.855 fused_ordering(1020) 00:14:34.855 fused_ordering(1021) 00:14:34.855 fused_ordering(1022) 00:14:34.855 fused_ordering(1023) 00:14:34.855 19:23:32 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:34.855 19:23:32 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:34.855 19:23:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:34.855 19:23:32 -- nvmf/common.sh@116 -- # sync 00:14:34.855 19:23:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:34.855 19:23:32 -- nvmf/common.sh@119 -- # set +e 00:14:34.855 19:23:32 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:14:34.855 19:23:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:34.855 rmmod nvme_tcp 00:14:34.855 rmmod nvme_fabrics 00:14:34.855 rmmod nvme_keyring 00:14:34.855 19:23:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:34.855 19:23:33 -- nvmf/common.sh@123 -- # set -e 00:14:34.855 19:23:33 -- nvmf/common.sh@124 -- # return 0 00:14:34.855 19:23:33 -- nvmf/common.sh@477 -- # '[' -n 1165249 ']' 00:14:34.855 19:23:33 -- nvmf/common.sh@478 -- # killprocess 1165249 00:14:34.855 19:23:33 -- common/autotest_common.sh@936 -- # '[' -z 1165249 ']' 00:14:34.855 19:23:33 -- common/autotest_common.sh@940 -- # kill -0 1165249 00:14:34.855 19:23:33 -- common/autotest_common.sh@941 -- # uname 00:14:34.855 19:23:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.855 19:23:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1165249 00:14:34.855 19:23:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:34.855 19:23:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:34.855 19:23:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1165249' 00:14:34.855 killing process with pid 1165249 00:14:34.855 19:23:33 -- common/autotest_common.sh@955 -- # kill 1165249 00:14:34.855 19:23:33 -- common/autotest_common.sh@960 -- # wait 1165249 00:14:35.112 19:23:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:35.112 19:23:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:35.112 19:23:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:35.112 19:23:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.112 19:23:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:35.112 19:23:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.112 19:23:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.112 19:23:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.673 19:23:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:37.673 00:14:37.673 real 0m8.332s 00:14:37.673 user 0m6.377s 00:14:37.673 sys 0m3.120s 00:14:37.673 19:23:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:37.673 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:37.673 ************************************ 00:14:37.673 END TEST nvmf_fused_ordering 00:14:37.673 ************************************ 00:14:37.673 19:23:35 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.673 19:23:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:37.673 19:23:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.673 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:37.673 ************************************ 00:14:37.673 START TEST nvmf_delete_subsystem 00:14:37.673 ************************************ 00:14:37.673 19:23:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.673 * Looking for test storage... 
00:14:37.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.673 19:23:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:37.673 19:23:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:37.673 19:23:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:37.673 19:23:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:37.673 19:23:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:37.673 19:23:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:37.673 19:23:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:37.673 19:23:35 -- scripts/common.sh@335 -- # IFS=.-: 00:14:37.673 19:23:35 -- scripts/common.sh@335 -- # read -ra ver1 00:14:37.673 19:23:35 -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.673 19:23:35 -- scripts/common.sh@336 -- # read -ra ver2 00:14:37.673 19:23:35 -- scripts/common.sh@337 -- # local 'op=<' 00:14:37.673 19:23:35 -- scripts/common.sh@339 -- # ver1_l=2 00:14:37.673 19:23:35 -- scripts/common.sh@340 -- # ver2_l=1 00:14:37.673 19:23:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:37.673 19:23:35 -- scripts/common.sh@343 -- # case "$op" in 00:14:37.673 19:23:35 -- scripts/common.sh@344 -- # : 1 00:14:37.673 19:23:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:37.673 19:23:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:37.673 19:23:35 -- scripts/common.sh@364 -- # decimal 1 00:14:37.673 19:23:35 -- scripts/common.sh@352 -- # local d=1 00:14:37.673 19:23:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.673 19:23:35 -- scripts/common.sh@354 -- # echo 1 00:14:37.673 19:23:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:37.673 19:23:35 -- scripts/common.sh@365 -- # decimal 2 00:14:37.673 19:23:35 -- scripts/common.sh@352 -- # local d=2 00:14:37.673 19:23:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.673 19:23:35 -- scripts/common.sh@354 -- # echo 2 00:14:37.673 19:23:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:37.673 19:23:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:37.673 19:23:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:37.673 19:23:35 -- scripts/common.sh@367 -- # return 0 00:14:37.673 19:23:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.673 19:23:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.673 --rc genhtml_branch_coverage=1 00:14:37.673 --rc genhtml_function_coverage=1 00:14:37.673 --rc genhtml_legend=1 00:14:37.673 --rc geninfo_all_blocks=1 00:14:37.673 --rc geninfo_unexecuted_blocks=1 00:14:37.673 00:14:37.673 ' 00:14:37.673 19:23:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.673 --rc genhtml_branch_coverage=1 00:14:37.673 --rc genhtml_function_coverage=1 00:14:37.673 --rc genhtml_legend=1 00:14:37.673 --rc geninfo_all_blocks=1 00:14:37.673 --rc geninfo_unexecuted_blocks=1 00:14:37.673 00:14:37.673 ' 00:14:37.673 19:23:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.673 --rc genhtml_branch_coverage=1 00:14:37.673 --rc genhtml_function_coverage=1 00:14:37.673 --rc genhtml_legend=1 00:14:37.673 --rc geninfo_all_blocks=1 00:14:37.673 --rc geninfo_unexecuted_blocks=1 00:14:37.673 00:14:37.673 
' 00:14:37.673 19:23:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:37.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.673 --rc genhtml_branch_coverage=1 00:14:37.673 --rc genhtml_function_coverage=1 00:14:37.673 --rc genhtml_legend=1 00:14:37.673 --rc geninfo_all_blocks=1 00:14:37.673 --rc geninfo_unexecuted_blocks=1 00:14:37.673 00:14:37.673 ' 00:14:37.673 19:23:35 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.673 19:23:35 -- nvmf/common.sh@7 -- # uname -s 00:14:37.673 19:23:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.673 19:23:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.673 19:23:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.673 19:23:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.673 19:23:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.673 19:23:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.673 19:23:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.673 19:23:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.673 19:23:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.673 19:23:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.673 19:23:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.673 19:23:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.673 19:23:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.673 19:23:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.673 19:23:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.673 19:23:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.673 19:23:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.673 19:23:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.673 19:23:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.673 19:23:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.674 19:23:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.674 19:23:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.674 19:23:35 -- paths/export.sh@5 -- # export PATH 00:14:37.674 19:23:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.674 19:23:35 -- nvmf/common.sh@46 -- # : 0 00:14:37.674 19:23:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:37.674 19:23:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:37.674 19:23:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:37.674 19:23:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.674 19:23:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.674 19:23:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:37.674 19:23:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:37.674 19:23:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:37.674 19:23:35 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:37.674 19:23:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:37.674 19:23:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.674 19:23:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:37.674 19:23:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:37.674 19:23:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:37.674 19:23:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.674 19:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.674 19:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.674 19:23:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:37.674 19:23:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:37.674 19:23:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:37.674 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:39.573 19:23:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:39.573 19:23:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:39.573 19:23:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:39.573 19:23:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:39.573 19:23:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:39.573 19:23:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:39.573 19:23:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:39.573 19:23:37 -- nvmf/common.sh@294 -- # net_devs=() 00:14:39.573 19:23:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:39.573 19:23:37 -- nvmf/common.sh@295 -- # e810=() 00:14:39.573 19:23:37 -- nvmf/common.sh@295 -- # local -ga e810 00:14:39.573 19:23:37 -- nvmf/common.sh@296 -- # x722=() 
00:14:39.573 19:23:37 -- nvmf/common.sh@296 -- # local -ga x722 00:14:39.573 19:23:37 -- nvmf/common.sh@297 -- # mlx=() 00:14:39.573 19:23:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:39.573 19:23:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.573 19:23:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:39.573 19:23:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:39.573 19:23:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:39.573 19:23:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:39.573 19:23:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:39.573 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:39.573 19:23:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:39.573 19:23:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:39.573 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:39.573 19:23:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:39.573 19:23:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:39.573 19:23:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.573 19:23:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:39.573 19:23:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.573 19:23:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:39.573 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:39.573 19:23:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
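The trace above walks nvmf/common.sh through NIC discovery: it matches the two E810 ports (0x8086:0x159b) and then resolves each PCI address to its kernel interface through sysfs. A condensed, standalone sketch of that lookup, with the PCI addresses from this run hard-coded, looks like this:

pci_devs=(0000:0a:00.0 0000:0a:00.1)                   # the two E810 ports found in this run
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

The first interface found (cvl_0_0) becomes the target-side port and the second (cvl_0_1) the initiator-side port, as the nvmf_tcp_init lines that follow show.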
00:14:39.573 19:23:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:39.573 19:23:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.573 19:23:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:39.573 19:23:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.573 19:23:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:39.573 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:39.573 19:23:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.573 19:23:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:39.573 19:23:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:39.573 19:23:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:39.573 19:23:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:39.573 19:23:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.573 19:23:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.573 19:23:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.573 19:23:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:39.573 19:23:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.573 19:23:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.573 19:23:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:39.573 19:23:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.574 19:23:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.574 19:23:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:39.574 19:23:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:39.574 19:23:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.574 19:23:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.574 19:23:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.574 19:23:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.574 19:23:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:39.574 19:23:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.574 19:23:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.574 19:23:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.574 19:23:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:39.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:14:39.574 00:14:39.574 --- 10.0.0.2 ping statistics --- 00:14:39.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.574 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:39.574 19:23:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:14:39.574 00:14:39.574 --- 10.0.0.1 ping statistics --- 00:14:39.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.574 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:39.574 19:23:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.574 19:23:37 -- nvmf/common.sh@410 -- # return 0 00:14:39.574 19:23:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:39.574 19:23:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.574 19:23:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:39.574 19:23:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:39.574 19:23:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.574 19:23:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:39.574 19:23:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:39.574 19:23:37 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:39.574 19:23:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:39.574 19:23:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.574 19:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.574 19:23:37 -- nvmf/common.sh@469 -- # nvmfpid=1167633 00:14:39.574 19:23:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:39.574 19:23:37 -- nvmf/common.sh@470 -- # waitforlisten 1167633 00:14:39.574 19:23:37 -- common/autotest_common.sh@829 -- # '[' -z 1167633 ']' 00:14:39.574 19:23:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.574 19:23:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.574 19:23:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.574 19:23:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.574 19:23:37 -- common/autotest_common.sh@10 -- # set +x 00:14:39.574 [2024-11-17 19:23:37.809614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:39.574 [2024-11-17 19:23:37.809701] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.833 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.833 [2024-11-17 19:23:37.879738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:39.833 [2024-11-17 19:23:37.969329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:39.833 [2024-11-17 19:23:37.969514] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.833 [2024-11-17 19:23:37.969534] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.833 [2024-11-17 19:23:37.969549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
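nvmf_tcp_init above builds the test topology by moving the target port into its own network namespace, leaving the initiator port in the host namespace, and verifying reachability in both directions before the target application is launched. A minimal sketch of that setup, using the interface names, addresses, and nvmf_tgt arguments from this trace (the relative build path assumes an SPDK checkout):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> host
modprobe nvme-tcp                                      # kernel NVMe/TCP support for later connects
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

With -m 0x3 the target runs two reactors, which matches the "Reactor started on core 0/1" notices that follow.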
00:14:39.833 [2024-11-17 19:23:37.969626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.833 [2024-11-17 19:23:37.969632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.771 19:23:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.771 19:23:38 -- common/autotest_common.sh@862 -- # return 0 00:14:40.771 19:23:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:40.771 19:23:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 19:23:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.771 19:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 [2024-11-17 19:23:38.805901] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.771 19:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:40.771 19:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 19:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.771 19:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 [2024-11-17 19:23:38.822147] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.771 19:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:40.771 19:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 NULL1 00:14:40.771 19:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:40.771 19:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 Delay0 00:14:40.771 19:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.771 19:23:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.771 19:23:38 -- common/autotest_common.sh@10 -- # set +x 00:14:40.771 19:23:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@28 -- # perf_pid=1167793 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:40.771 19:23:38 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:40.771 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.771 [2024-11-17 19:23:38.896809] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:42.683 19:23:40 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.683 19:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.683 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 [2024-11-17 19:23:41.017939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6579e0 is same with the state(5) to be set 00:14:42.943 Read completed with error (sct=0, sc=8) 
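The rpc_cmd calls traced above configure the target before the deletion is attempted: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a namespace backed by a delay bdev so that I/O is still in flight when the subsystem is torn down. Reconstructed as direct JSON-RPC calls (scripts/rpc.py against the default /var/tmp/spdk.sock is assumed here; the test harness issues the same calls through its rpc_cmd wrapper):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                    # null bdev: 1000 MB, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s of added latency per I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O against the slow namespace, then delete the subsystem underneath it.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The "Read/Write completed with error (sct=0, sc=8)" lines that follow are the point of the test: perf's queued commands fail once the subsystem is deleted out from under them, and the harness later checks (via NOT wait on the perf pid) that the perf process exited with an error.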
00:14:42.943 starting I/O failed: -6 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed 
with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 starting I/O failed: -6 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Write completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.943 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 starting I/O failed: -6 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 starting I/O failed: -6 00:14:42.944 [2024-11-17 19:23:41.018604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c6c00c1d0 is same with the state(5) to be set 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, 
sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:42.944 Read completed with error (sct=0, sc=8) 00:14:42.944 Write completed with error (sct=0, sc=8) 00:14:43.880 [2024-11-17 19:23:41.992302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6599c0 is same with the state(5) to be set 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 [2024-11-17 19:23:42.016220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c6c00c480 is same with the state(5) to be set 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 [2024-11-17 19:23:42.016448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c6c00bf20 is same with the state(5) to be set 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Read completed with error (sct=0, sc=8) 00:14:43.880 Write completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 [2024-11-17 19:23:42.020759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658240 is same with the state(5) to be set 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error 
(sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 Write completed with error (sct=0, sc=8) 00:14:43.881 Read completed with error (sct=0, sc=8) 00:14:43.881 [2024-11-17 19:23:42.021586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657ce0 is same with the state(5) to be set 00:14:43.881 [2024-11-17 19:23:42.022533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6599c0 (9): Bad file descriptor 00:14:43.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:43.881 19:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.881 19:23:42 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:43.881 19:23:42 -- target/delete_subsystem.sh@35 -- # kill -0 1167793 00:14:43.881 19:23:42 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:43.881 Initializing NVMe Controllers 00:14:43.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.881 Controller IO queue size 128, less than required. 00:14:43.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:43.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:43.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:43.881 Initialization complete. Launching workers. 
00:14:43.881 ======================================================== 00:14:43.881 Latency(us) 00:14:43.881 Device Information : IOPS MiB/s Average min max 00:14:43.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.79 0.08 896727.34 651.95 1011366.80 00:14:43.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 141.98 0.07 968082.20 335.21 1013750.92 00:14:43.881 ======================================================== 00:14:43.881 Total : 310.78 0.15 929327.16 335.21 1013750.92 00:14:43.881 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@35 -- # kill -0 1167793 00:14:44.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1167793) - No such process 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@45 -- # NOT wait 1167793 00:14:44.450 19:23:42 -- common/autotest_common.sh@650 -- # local es=0 00:14:44.450 19:23:42 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1167793 00:14:44.450 19:23:42 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:44.450 19:23:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.450 19:23:42 -- common/autotest_common.sh@642 -- # type -t wait 00:14:44.450 19:23:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.450 19:23:42 -- common/autotest_common.sh@653 -- # wait 1167793 00:14:44.450 19:23:42 -- common/autotest_common.sh@653 -- # es=1 00:14:44.450 19:23:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.450 19:23:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.450 19:23:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:44.450 19:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.450 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:14:44.450 19:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.450 19:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.450 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:14:44.450 [2024-11-17 19:23:42.546741] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.450 19:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.450 19:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.450 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:14:44.450 19:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@54 -- # perf_pid=1168319 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:44.450 19:23:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.450 EAL: No free 2048 kB hugepages 
reported on node 1 00:14:44.450 [2024-11-17 19:23:42.608368] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:45.014 19:23:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.014 19:23:43 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:45.014 19:23:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.584 19:23:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.584 19:23:43 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:45.584 19:23:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.844 19:23:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.844 19:23:44 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:45.844 19:23:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.412 19:23:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.412 19:23:44 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:46.412 19:23:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.980 19:23:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.980 19:23:45 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:46.980 19:23:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:47.550 19:23:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:47.550 19:23:45 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:47.550 19:23:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:47.809 Initializing NVMe Controllers 00:14:47.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.809 Controller IO queue size 128, less than required. 00:14:47.809 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:47.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:47.809 Initialization complete. Launching workers. 
00:14:47.809 ======================================================== 00:14:47.809 Latency(us) 00:14:47.809 Device Information : IOPS MiB/s Average min max 00:14:47.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003303.63 1000160.57 1010773.10 00:14:47.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004010.70 1000155.02 1012183.33 00:14:47.809 ======================================================== 00:14:47.809 Total : 256.00 0.12 1003657.16 1000155.02 1012183.33 00:14:47.809 00:14:48.069 19:23:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.069 19:23:46 -- target/delete_subsystem.sh@57 -- # kill -0 1168319 00:14:48.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1168319) - No such process 00:14:48.069 19:23:46 -- target/delete_subsystem.sh@67 -- # wait 1168319 00:14:48.069 19:23:46 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:48.069 19:23:46 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:48.069 19:23:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:48.069 19:23:46 -- nvmf/common.sh@116 -- # sync 00:14:48.069 19:23:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:48.069 19:23:46 -- nvmf/common.sh@119 -- # set +e 00:14:48.069 19:23:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:48.069 19:23:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:48.069 rmmod nvme_tcp 00:14:48.069 rmmod nvme_fabrics 00:14:48.069 rmmod nvme_keyring 00:14:48.069 19:23:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:48.069 19:23:46 -- nvmf/common.sh@123 -- # set -e 00:14:48.069 19:23:46 -- nvmf/common.sh@124 -- # return 0 00:14:48.069 19:23:46 -- nvmf/common.sh@477 -- # '[' -n 1167633 ']' 00:14:48.069 19:23:46 -- nvmf/common.sh@478 -- # killprocess 1167633 00:14:48.069 19:23:46 -- common/autotest_common.sh@936 -- # '[' -z 1167633 ']' 00:14:48.069 19:23:46 -- common/autotest_common.sh@940 -- # kill -0 1167633 00:14:48.069 19:23:46 -- common/autotest_common.sh@941 -- # uname 00:14:48.069 19:23:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.069 19:23:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1167633 00:14:48.069 19:23:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.069 19:23:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.070 19:23:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1167633' 00:14:48.070 killing process with pid 1167633 00:14:48.070 19:23:46 -- common/autotest_common.sh@955 -- # kill 1167633 00:14:48.070 19:23:46 -- common/autotest_common.sh@960 -- # wait 1167633 00:14:48.329 19:23:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:48.329 19:23:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:48.329 19:23:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:48.329 19:23:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.329 19:23:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:48.329 19:23:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.329 19:23:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.329 19:23:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.236 19:23:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:50.236 00:14:50.236 real 0m13.127s 00:14:50.236 user 0m29.356s 00:14:50.236 sys 0m3.044s 
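The block above is the heart of the delete_subsystem test: spdk_nvme_perf runs a short random read/write workload against nqn.2016-06.io.spdk:cnode1 over TCP while the target subsystem is torn down underneath it (the script later re-creates it with nvmf_create_subsystem, which is why the run ends with "completed with error" lines and a missing PID), and the harness then polls the perf PID until the process has gone away. A minimal sketch of that polling pattern, assuming a hypothetical $PERF binary path and an abbreviated rpc.py call in place of the full script paths used in the trace:

  # start the workload in the background with the same flags as the trace
  $PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  # delete the subsystem while I/O is still in flight (the delete call itself is
  # outside this excerpt; nvmf_delete_subsystem is the standard SPDK RPC for it)
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only tests that the PID still exists
      (( delay++ > 20 )) && break             # give up after ~10 s, as the script does
      sleep 0.5
  done
  wait "$perf_pid" || true                    # reap it; a non-zero exit is expected here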
00:14:50.236 19:23:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:50.236 19:23:48 -- common/autotest_common.sh@10 -- # set +x 00:14:50.236 ************************************ 00:14:50.236 END TEST nvmf_delete_subsystem 00:14:50.236 ************************************ 00:14:50.236 19:23:48 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:50.236 19:23:48 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:50.237 19:23:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.237 19:23:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.237 19:23:48 -- common/autotest_common.sh@10 -- # set +x 00:14:50.237 ************************************ 00:14:50.237 START TEST nvmf_nvme_cli 00:14:50.237 ************************************ 00:14:50.237 19:23:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:50.498 * Looking for test storage... 00:14:50.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.498 19:23:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:50.498 19:23:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:50.498 19:23:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:50.498 19:23:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:50.498 19:23:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:50.498 19:23:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:50.498 19:23:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:50.498 19:23:48 -- scripts/common.sh@335 -- # IFS=.-: 00:14:50.498 19:23:48 -- scripts/common.sh@335 -- # read -ra ver1 00:14:50.498 19:23:48 -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.498 19:23:48 -- scripts/common.sh@336 -- # read -ra ver2 00:14:50.498 19:23:48 -- scripts/common.sh@337 -- # local 'op=<' 00:14:50.498 19:23:48 -- scripts/common.sh@339 -- # ver1_l=2 00:14:50.498 19:23:48 -- scripts/common.sh@340 -- # ver2_l=1 00:14:50.498 19:23:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:50.498 19:23:48 -- scripts/common.sh@343 -- # case "$op" in 00:14:50.498 19:23:48 -- scripts/common.sh@344 -- # : 1 00:14:50.498 19:23:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:50.498 19:23:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.498 19:23:48 -- scripts/common.sh@364 -- # decimal 1 00:14:50.498 19:23:48 -- scripts/common.sh@352 -- # local d=1 00:14:50.498 19:23:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.498 19:23:48 -- scripts/common.sh@354 -- # echo 1 00:14:50.498 19:23:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:50.498 19:23:48 -- scripts/common.sh@365 -- # decimal 2 00:14:50.498 19:23:48 -- scripts/common.sh@352 -- # local d=2 00:14:50.498 19:23:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.498 19:23:48 -- scripts/common.sh@354 -- # echo 2 00:14:50.498 19:23:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:50.498 19:23:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:50.498 19:23:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:50.498 19:23:48 -- scripts/common.sh@367 -- # return 0 00:14:50.498 19:23:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.498 19:23:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.498 --rc genhtml_branch_coverage=1 00:14:50.498 --rc genhtml_function_coverage=1 00:14:50.498 --rc genhtml_legend=1 00:14:50.498 --rc geninfo_all_blocks=1 00:14:50.498 --rc geninfo_unexecuted_blocks=1 00:14:50.498 00:14:50.498 ' 00:14:50.498 19:23:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.498 --rc genhtml_branch_coverage=1 00:14:50.498 --rc genhtml_function_coverage=1 00:14:50.498 --rc genhtml_legend=1 00:14:50.498 --rc geninfo_all_blocks=1 00:14:50.498 --rc geninfo_unexecuted_blocks=1 00:14:50.498 00:14:50.498 ' 00:14:50.498 19:23:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.498 --rc genhtml_branch_coverage=1 00:14:50.498 --rc genhtml_function_coverage=1 00:14:50.498 --rc genhtml_legend=1 00:14:50.498 --rc geninfo_all_blocks=1 00:14:50.498 --rc geninfo_unexecuted_blocks=1 00:14:50.498 00:14:50.498 ' 00:14:50.498 19:23:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.498 --rc genhtml_branch_coverage=1 00:14:50.498 --rc genhtml_function_coverage=1 00:14:50.498 --rc genhtml_legend=1 00:14:50.498 --rc geninfo_all_blocks=1 00:14:50.498 --rc geninfo_unexecuted_blocks=1 00:14:50.498 00:14:50.498 ' 00:14:50.498 19:23:48 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.498 19:23:48 -- nvmf/common.sh@7 -- # uname -s 00:14:50.498 19:23:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.498 19:23:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.498 19:23:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.498 19:23:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.498 19:23:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.498 19:23:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.498 19:23:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.498 19:23:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.498 19:23:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.498 19:23:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.498 19:23:48 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.498 19:23:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.498 19:23:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.498 19:23:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.498 19:23:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.498 19:23:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.498 19:23:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.498 19:23:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.498 19:23:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.498 19:23:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.498 19:23:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.498 19:23:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.499 19:23:48 -- paths/export.sh@5 -- # export PATH 00:14:50.499 19:23:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.499 19:23:48 -- nvmf/common.sh@46 -- # : 0 00:14:50.499 19:23:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:50.499 19:23:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:50.499 19:23:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:50.499 19:23:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.499 19:23:48 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.499 19:23:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:50.499 19:23:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:50.499 19:23:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:50.499 19:23:48 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.499 19:23:48 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.499 19:23:48 -- target/nvme_cli.sh@14 -- # devs=() 00:14:50.499 19:23:48 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:50.499 19:23:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:50.499 19:23:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.499 19:23:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:50.499 19:23:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:50.499 19:23:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:50.499 19:23:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.499 19:23:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.499 19:23:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.499 19:23:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:50.499 19:23:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:50.499 19:23:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:50.499 19:23:48 -- common/autotest_common.sh@10 -- # set +x 00:14:52.458 19:23:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:52.458 19:23:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:52.458 19:23:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:52.458 19:23:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:52.458 19:23:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:52.458 19:23:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:52.458 19:23:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:52.458 19:23:50 -- nvmf/common.sh@294 -- # net_devs=() 00:14:52.458 19:23:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:52.458 19:23:50 -- nvmf/common.sh@295 -- # e810=() 00:14:52.458 19:23:50 -- nvmf/common.sh@295 -- # local -ga e810 00:14:52.458 19:23:50 -- nvmf/common.sh@296 -- # x722=() 00:14:52.458 19:23:50 -- nvmf/common.sh@296 -- # local -ga x722 00:14:52.458 19:23:50 -- nvmf/common.sh@297 -- # mlx=() 00:14:52.458 19:23:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:52.458 19:23:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.458 19:23:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:52.458 19:23:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:52.458 19:23:50 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:52.458 19:23:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:52.458 19:23:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:52.458 19:23:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:52.458 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:52.458 19:23:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:52.458 19:23:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:52.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:52.458 19:23:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:52.458 19:23:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:52.458 19:23:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.458 19:23:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:52.458 19:23:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.458 19:23:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:52.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:52.458 19:23:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.458 19:23:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:52.458 19:23:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.458 19:23:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:52.458 19:23:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.458 19:23:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:52.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:52.458 19:23:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.458 19:23:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:52.458 19:23:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:52.458 19:23:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:52.458 19:23:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:52.458 19:23:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.458 19:23:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.458 19:23:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.458 19:23:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:52.458 19:23:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.458 19:23:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.458 19:23:50 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:52.458 19:23:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.458 19:23:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.458 19:23:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:52.458 19:23:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:52.458 19:23:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.458 19:23:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.458 19:23:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.459 19:23:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.459 19:23:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:52.459 19:23:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.719 19:23:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.719 19:23:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.719 19:23:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:52.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:14:52.719 00:14:52.719 --- 10.0.0.2 ping statistics --- 00:14:52.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.719 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:14:52.719 19:23:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:14:52.719 00:14:52.719 --- 10.0.0.1 ping statistics --- 00:14:52.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.719 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:52.719 19:23:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.719 19:23:50 -- nvmf/common.sh@410 -- # return 0 00:14:52.719 19:23:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:52.719 19:23:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.719 19:23:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:52.719 19:23:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:52.719 19:23:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.719 19:23:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:52.719 19:23:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:52.719 19:23:50 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:52.719 19:23:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.719 19:23:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:52.719 19:23:50 -- common/autotest_common.sh@10 -- # set +x 00:14:52.719 19:23:50 -- nvmf/common.sh@469 -- # nvmfpid=1170700 00:14:52.719 19:23:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.719 19:23:50 -- nvmf/common.sh@470 -- # waitforlisten 1170700 00:14:52.719 19:23:50 -- common/autotest_common.sh@829 -- # '[' -z 1170700 ']' 00:14:52.719 19:23:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.719 19:23:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.719 19:23:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.719 19:23:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.719 19:23:50 -- common/autotest_common.sh@10 -- # set +x 00:14:52.719 [2024-11-17 19:23:50.819894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:52.719 [2024-11-17 19:23:50.819968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.719 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.719 [2024-11-17 19:23:50.896943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.979 [2024-11-17 19:23:50.995572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.979 [2024-11-17 19:23:50.995747] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.979 [2024-11-17 19:23:50.995770] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.979 [2024-11-17 19:23:50.995785] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.979 [2024-11-17 19:23:50.995845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.979 [2024-11-17 19:23:50.995878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.979 [2024-11-17 19:23:50.996001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.979 [2024-11-17 19:23:50.996004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.915 19:23:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.915 19:23:51 -- common/autotest_common.sh@862 -- # return 0 00:14:53.915 19:23:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.915 19:23:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 19:23:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.915 19:23:51 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 [2024-11-17 19:23:51.888572] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 Malloc0 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 Malloc1 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 
-i 291 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 [2024-11-17 19:23:51.969937] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.915 19:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.915 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.915 19:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.915 19:23:51 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:53.915 00:14:53.915 Discovery Log Number of Records 2, Generation counter 2 00:14:53.915 =====Discovery Log Entry 0====== 00:14:53.915 trtype: tcp 00:14:53.915 adrfam: ipv4 00:14:53.915 subtype: current discovery subsystem 00:14:53.915 treq: not required 00:14:53.915 portid: 0 00:14:53.915 trsvcid: 4420 00:14:53.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:53.915 traddr: 10.0.0.2 00:14:53.915 eflags: explicit discovery connections, duplicate discovery information 00:14:53.915 sectype: none 00:14:53.915 =====Discovery Log Entry 1====== 00:14:53.915 trtype: tcp 00:14:53.915 adrfam: ipv4 00:14:53.915 subtype: nvme subsystem 00:14:53.915 treq: not required 00:14:53.915 portid: 0 00:14:53.915 trsvcid: 4420 00:14:53.915 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:53.915 traddr: 10.0.0.2 00:14:53.915 eflags: none 00:14:53.915 sectype: none 00:14:53.915 19:23:52 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:53.915 19:23:52 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:53.915 19:23:52 -- nvmf/common.sh@510 -- # local dev _ 00:14:53.915 19:23:52 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.915 19:23:52 -- nvmf/common.sh@509 -- # nvme list 00:14:53.915 19:23:52 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:53.915 19:23:52 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.915 19:23:52 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.915 19:23:52 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.915 19:23:52 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:53.915 19:23:52 -- target/nvme_cli.sh@32 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:54.852 19:23:52 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:54.852 19:23:52 -- common/autotest_common.sh@1187 -- # local i=0 00:14:54.852 19:23:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.852 19:23:52 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:14:54.852 19:23:52 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:14:54.852 19:23:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:56.749 19:23:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:56.749 19:23:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:56.749 19:23:54 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.749 19:23:54 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:14:56.749 19:23:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.749 19:23:54 -- common/autotest_common.sh@1197 -- # return 0 00:14:56.749 19:23:54 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:56.749 19:23:54 -- nvmf/common.sh@510 -- # local dev _ 00:14:56.749 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.749 19:23:54 -- nvmf/common.sh@509 -- # nvme list 00:14:56.749 19:23:54 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:56.749 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.749 19:23:54 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:56.749 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.749 19:23:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:56.749 19:23:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:56.749 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.749 19:23:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:56.749 19:23:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:56.749 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.749 19:23:54 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:56.749 /dev/nvme0n2 ]] 00:14:56.749 19:23:54 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:56.749 19:23:54 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:56.749 19:23:54 -- nvmf/common.sh@510 -- # local dev _ 00:14:56.749 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.750 19:23:54 -- nvmf/common.sh@509 -- # nvme list 00:14:56.750 19:23:54 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:56.750 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.750 19:23:54 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:56.750 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.750 19:23:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:56.750 19:23:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:56.750 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.750 19:23:54 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:56.750 19:23:54 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:56.750 19:23:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:56.750 19:23:54 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:56.750 19:23:54 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.750 19:23:54 -- 
target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.750 19:23:54 -- common/autotest_common.sh@1208 -- # local i=0 00:14:56.750 19:23:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:56.750 19:23:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.750 19:23:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:56.750 19:23:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.750 19:23:54 -- common/autotest_common.sh@1220 -- # return 0 00:14:56.750 19:23:54 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:56.750 19:23:54 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.750 19:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.750 19:23:54 -- common/autotest_common.sh@10 -- # set +x 00:14:56.750 19:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.750 19:23:54 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:56.750 19:23:54 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:56.750 19:23:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:56.750 19:23:54 -- nvmf/common.sh@116 -- # sync 00:14:56.750 19:23:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:56.750 19:23:54 -- nvmf/common.sh@119 -- # set +e 00:14:56.750 19:23:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:56.750 19:23:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:56.750 rmmod nvme_tcp 00:14:56.750 rmmod nvme_fabrics 00:14:56.750 rmmod nvme_keyring 00:14:57.009 19:23:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:57.009 19:23:55 -- nvmf/common.sh@123 -- # set -e 00:14:57.009 19:23:55 -- nvmf/common.sh@124 -- # return 0 00:14:57.009 19:23:55 -- nvmf/common.sh@477 -- # '[' -n 1170700 ']' 00:14:57.009 19:23:55 -- nvmf/common.sh@478 -- # killprocess 1170700 00:14:57.009 19:23:55 -- common/autotest_common.sh@936 -- # '[' -z 1170700 ']' 00:14:57.009 19:23:55 -- common/autotest_common.sh@940 -- # kill -0 1170700 00:14:57.009 19:23:55 -- common/autotest_common.sh@941 -- # uname 00:14:57.009 19:23:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.009 19:23:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1170700 00:14:57.009 19:23:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:57.009 19:23:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:57.009 19:23:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1170700' 00:14:57.009 killing process with pid 1170700 00:14:57.009 19:23:55 -- common/autotest_common.sh@955 -- # kill 1170700 00:14:57.009 19:23:55 -- common/autotest_common.sh@960 -- # wait 1170700 00:14:57.268 19:23:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:57.268 19:23:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:57.268 19:23:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:57.268 19:23:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.268 19:23:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:57.268 19:23:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.268 19:23:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.268 19:23:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.170 19:23:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:59.170 00:14:59.170 real 0m8.905s 00:14:59.170 user 0m18.217s 
00:14:59.170 sys 0m2.200s 00:14:59.170 19:23:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:59.170 19:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:59.170 ************************************ 00:14:59.170 END TEST nvmf_nvme_cli 00:14:59.170 ************************************ 00:14:59.170 19:23:57 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:14:59.170 19:23:57 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:59.170 19:23:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:59.170 19:23:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.170 19:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:59.428 ************************************ 00:14:59.428 START TEST nvmf_vfio_user 00:14:59.428 ************************************ 00:14:59.428 19:23:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:59.428 * Looking for test storage... 00:14:59.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.428 19:23:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:59.428 19:23:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:59.428 19:23:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:59.428 19:23:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:59.428 19:23:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:59.428 19:23:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:59.428 19:23:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:59.428 19:23:57 -- scripts/common.sh@335 -- # IFS=.-: 00:14:59.428 19:23:57 -- scripts/common.sh@335 -- # read -ra ver1 00:14:59.428 19:23:57 -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.428 19:23:57 -- scripts/common.sh@336 -- # read -ra ver2 00:14:59.428 19:23:57 -- scripts/common.sh@337 -- # local 'op=<' 00:14:59.428 19:23:57 -- scripts/common.sh@339 -- # ver1_l=2 00:14:59.428 19:23:57 -- scripts/common.sh@340 -- # ver2_l=1 00:14:59.428 19:23:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:59.428 19:23:57 -- scripts/common.sh@343 -- # case "$op" in 00:14:59.428 19:23:57 -- scripts/common.sh@344 -- # : 1 00:14:59.428 19:23:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:59.429 19:23:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.429 19:23:57 -- scripts/common.sh@364 -- # decimal 1 00:14:59.429 19:23:57 -- scripts/common.sh@352 -- # local d=1 00:14:59.429 19:23:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.429 19:23:57 -- scripts/common.sh@354 -- # echo 1 00:14:59.429 19:23:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:59.429 19:23:57 -- scripts/common.sh@365 -- # decimal 2 00:14:59.429 19:23:57 -- scripts/common.sh@352 -- # local d=2 00:14:59.429 19:23:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.429 19:23:57 -- scripts/common.sh@354 -- # echo 2 00:14:59.429 19:23:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:59.429 19:23:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:59.429 19:23:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:59.429 19:23:57 -- scripts/common.sh@367 -- # return 0 00:14:59.429 19:23:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.429 19:23:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.429 --rc genhtml_branch_coverage=1 00:14:59.429 --rc genhtml_function_coverage=1 00:14:59.429 --rc genhtml_legend=1 00:14:59.429 --rc geninfo_all_blocks=1 00:14:59.429 --rc geninfo_unexecuted_blocks=1 00:14:59.429 00:14:59.429 ' 00:14:59.429 19:23:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.429 --rc genhtml_branch_coverage=1 00:14:59.429 --rc genhtml_function_coverage=1 00:14:59.429 --rc genhtml_legend=1 00:14:59.429 --rc geninfo_all_blocks=1 00:14:59.429 --rc geninfo_unexecuted_blocks=1 00:14:59.429 00:14:59.429 ' 00:14:59.429 19:23:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.429 --rc genhtml_branch_coverage=1 00:14:59.429 --rc genhtml_function_coverage=1 00:14:59.429 --rc genhtml_legend=1 00:14:59.429 --rc geninfo_all_blocks=1 00:14:59.429 --rc geninfo_unexecuted_blocks=1 00:14:59.429 00:14:59.429 ' 00:14:59.429 19:23:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.429 --rc genhtml_branch_coverage=1 00:14:59.429 --rc genhtml_function_coverage=1 00:14:59.429 --rc genhtml_legend=1 00:14:59.429 --rc geninfo_all_blocks=1 00:14:59.429 --rc geninfo_unexecuted_blocks=1 00:14:59.429 00:14:59.429 ' 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.429 19:23:57 -- nvmf/common.sh@7 -- # uname -s 00:14:59.429 19:23:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.429 19:23:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.429 19:23:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.429 19:23:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.429 19:23:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.429 19:23:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.429 19:23:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.429 19:23:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.429 19:23:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.429 19:23:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.429 19:23:57 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.429 19:23:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.429 19:23:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.429 19:23:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.429 19:23:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.429 19:23:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.429 19:23:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.429 19:23:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.429 19:23:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.429 19:23:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.429 19:23:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.429 19:23:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.429 19:23:57 -- paths/export.sh@5 -- # export PATH 00:14:59.429 19:23:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.429 19:23:57 -- nvmf/common.sh@46 -- # : 0 00:14:59.429 19:23:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:59.429 19:23:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:59.429 19:23:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:59.429 19:23:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.429 19:23:57 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.429 19:23:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:59.429 19:23:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:59.429 19:23:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1171658 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1171658' 00:14:59.429 Process pid: 1171658 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:59.429 19:23:57 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1171658 00:14:59.429 19:23:57 -- common/autotest_common.sh@829 -- # '[' -z 1171658 ']' 00:14:59.429 19:23:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.429 19:23:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.429 19:23:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.429 19:23:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.429 19:23:57 -- common/autotest_common.sh@10 -- # set +x 00:14:59.429 [2024-11-17 19:23:57.635505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:59.429 [2024-11-17 19:23:57.635575] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.429 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.688 [2024-11-17 19:23:57.698957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.688 [2024-11-17 19:23:57.784667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.688 [2024-11-17 19:23:57.784810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.688 [2024-11-17 19:23:57.784827] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.688 [2024-11-17 19:23:57.784848] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
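At this point the vfio-user target is being wired up. Condensed, the RPC sequence that follows (shown in the trace with the full /var/jenkins/... script paths; rpc.py is shorthand here) creates a VFIOUSER transport, a per-controller directory that serves as the listener address, a malloc bdev sized by MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE, and a subsystem with that bdev as a namespace; the target then places its vfio-user control file (cntrl) inside the listener directory:

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1          # listener address is a directory
  rpc.py bdev_malloc_create 64 512 -b Malloc1              # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the script
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
         -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same steps repeat for a second controller under /var/run/vfio-user/domain/vfio-user2/2 before spdk_nvme_identify is pointed at the first one.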
00:14:59.688 [2024-11-17 19:23:57.784901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.688 [2024-11-17 19:23:57.784961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.688 [2024-11-17 19:23:57.785030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.688 [2024-11-17 19:23:57.785033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.624 19:23:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.624 19:23:58 -- common/autotest_common.sh@862 -- # return 0 00:15:00.624 19:23:58 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:01.557 19:23:59 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:01.815 19:23:59 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:01.815 19:23:59 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:01.815 19:23:59 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.815 19:23:59 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:01.815 19:23:59 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:02.072 Malloc1 00:15:02.072 19:24:00 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:02.330 19:24:00 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:02.587 19:24:00 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:02.844 19:24:01 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.844 19:24:01 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:02.844 19:24:01 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:03.101 Malloc2 00:15:03.101 19:24:01 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:03.359 19:24:01 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:03.927 19:24:01 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:03.927 19:24:02 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:03.927 19:24:02 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:03.927 19:24:02 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:03.927 19:24:02 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:03.927 19:24:02 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:03.927 19:24:02 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:03.927 [2024-11-17 19:24:02.175197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:03.927 [2024-11-17 19:24:02.175232] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172341 ] 00:15:03.927 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.188 [2024-11-17 19:24:02.207036] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:04.188 [2024-11-17 19:24:02.216080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:04.188 [2024-11-17 19:24:02.216114] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0c81540000 00:15:04.188 [2024-11-17 19:24:02.217077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.218069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.219071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.220081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.221084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.222089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.223096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.224104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.188 [2024-11-17 19:24:02.225115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:04.188 [2024-11-17 19:24:02.225135] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0c8023a000 00:15:04.188 [2024-11-17 19:24:02.226259] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:04.188 [2024-11-17 19:24:02.241272] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:04.189 [2024-11-17 19:24:02.241307] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:04.189 [2024-11-17 19:24:02.246249] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:04.189 [2024-11-17 19:24:02.246300] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 
num_trackers = 192 00:15:04.189 [2024-11-17 19:24:02.246385] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:04.189 [2024-11-17 19:24:02.246415] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:04.189 [2024-11-17 19:24:02.246426] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:04.189 [2024-11-17 19:24:02.247244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:04.189 [2024-11-17 19:24:02.247264] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:04.189 [2024-11-17 19:24:02.247277] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:04.189 [2024-11-17 19:24:02.248249] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:04.189 [2024-11-17 19:24:02.248266] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:04.189 [2024-11-17 19:24:02.248279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:04.189 [2024-11-17 19:24:02.249257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:04.189 [2024-11-17 19:24:02.249279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:04.189 [2024-11-17 19:24:02.250262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:04.189 [2024-11-17 19:24:02.250281] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:04.189 [2024-11-17 19:24:02.250290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:04.189 [2024-11-17 19:24:02.250301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:04.189 [2024-11-17 19:24:02.250410] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:04.189 [2024-11-17 19:24:02.250419] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:04.189 [2024-11-17 19:24:02.250427] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:04.189 [2024-11-17 19:24:02.251266] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:04.189 [2024-11-17 19:24:02.252271] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:04.189 [2024-11-17 19:24:02.253279] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:04.189 [2024-11-17 19:24:02.254315] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:04.189 [2024-11-17 19:24:02.255291] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:04.189 [2024-11-17 19:24:02.255309] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:04.189 [2024-11-17 19:24:02.255318] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255342] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:04.189 [2024-11-17 19:24:02.255360] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255382] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.189 [2024-11-17 19:24:02.255392] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.189 [2024-11-17 19:24:02.255410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.255467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.255482] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:04.189 [2024-11-17 19:24:02.255490] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:04.189 [2024-11-17 19:24:02.255497] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:04.189 [2024-11-17 19:24:02.255508] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:04.189 [2024-11-17 19:24:02.255518] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:04.189 [2024-11-17 19:24:02.255524] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:04.189 [2024-11-17 19:24:02.255532] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 
cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.255577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.255597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.189 [2024-11-17 19:24:02.255611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.189 [2024-11-17 19:24:02.255622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.189 [2024-11-17 19:24:02.255633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.189 [2024-11-17 19:24:02.255641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255679] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.255710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.255721] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:04.189 [2024-11-17 19:24:02.255729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255740] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255753] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.255779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.255841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255855] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255868] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:04.189 [2024-11-17 19:24:02.255876] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:04.189 [2024-11-17 19:24:02.255889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 
cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.255907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.255926] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:04.189 [2024-11-17 19:24:02.255941] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255970] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.255982] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.189 [2024-11-17 19:24:02.255990] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.189 [2024-11-17 19:24:02.256000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.256024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.256044] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.256058] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.256070] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.189 [2024-11-17 19:24:02.256078] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.189 [2024-11-17 19:24:02.256087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.189 [2024-11-17 19:24:02.256098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:04.189 [2024-11-17 19:24:02.256111] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:04.189 [2024-11-17 19:24:02.256122] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:04.190 [2024-11-17 19:24:02.256135] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:04.190 [2024-11-17 19:24:02.256146] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:04.190 [2024-11-17 19:24:02.256154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:04.190 [2024-11-17 19:24:02.256162] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - 
Host ID 00:15:04.190 [2024-11-17 19:24:02.256170] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:04.190 [2024-11-17 19:24:02.256178] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:04.190 [2024-11-17 19:24:02.256201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256336] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:04.190 [2024-11-17 19:24:02.256344] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:04.190 [2024-11-17 19:24:02.256350] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:04.190 [2024-11-17 19:24:02.256356] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:04.190 [2024-11-17 19:24:02.256365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:04.190 [2024-11-17 19:24:02.256376] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:04.190 [2024-11-17 19:24:02.256384] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:04.190 [2024-11-17 19:24:02.256392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256403] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:04.190 [2024-11-17 19:24:02.256410] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.190 [2024-11-17 19:24:02.256419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256430] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:04.190 
[2024-11-17 19:24:02.256438] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:04.190 [2024-11-17 19:24:02.256446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:04.190 [2024-11-17 19:24:02.256457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:04.190 [2024-11-17 19:24:02.256504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:04.190 ===================================================== 00:15:04.190 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.190 ===================================================== 00:15:04.190 Controller Capabilities/Features 00:15:04.190 ================================ 00:15:04.190 Vendor ID: 4e58 00:15:04.190 Subsystem Vendor ID: 4e58 00:15:04.190 Serial Number: SPDK1 00:15:04.190 Model Number: SPDK bdev Controller 00:15:04.190 Firmware Version: 24.01.1 00:15:04.190 Recommended Arb Burst: 6 00:15:04.190 IEEE OUI Identifier: 8d 6b 50 00:15:04.190 Multi-path I/O 00:15:04.190 May have multiple subsystem ports: Yes 00:15:04.190 May have multiple controllers: Yes 00:15:04.190 Associated with SR-IOV VF: No 00:15:04.190 Max Data Transfer Size: 131072 00:15:04.190 Max Number of Namespaces: 32 00:15:04.190 Max Number of I/O Queues: 127 00:15:04.190 NVMe Specification Version (VS): 1.3 00:15:04.190 NVMe Specification Version (Identify): 1.3 00:15:04.190 Maximum Queue Entries: 256 00:15:04.190 Contiguous Queues Required: Yes 00:15:04.190 Arbitration Mechanisms Supported 00:15:04.190 Weighted Round Robin: Not Supported 00:15:04.190 Vendor Specific: Not Supported 00:15:04.190 Reset Timeout: 15000 ms 00:15:04.190 Doorbell Stride: 4 bytes 00:15:04.190 NVM Subsystem Reset: Not Supported 00:15:04.190 Command Sets Supported 00:15:04.190 NVM Command Set: Supported 00:15:04.190 Boot Partition: Not Supported 00:15:04.190 Memory Page Size Minimum: 4096 bytes 00:15:04.190 Memory Page Size Maximum: 4096 bytes 00:15:04.190 Persistent Memory Region: Not Supported 00:15:04.190 Optional Asynchronous Events Supported 00:15:04.190 Namespace Attribute Notices: Supported 00:15:04.190 Firmware Activation Notices: Not Supported 00:15:04.190 ANA Change Notices: Not Supported 00:15:04.190 PLE Aggregate Log Change Notices: Not Supported 00:15:04.190 LBA Status Info Alert Notices: Not Supported 00:15:04.190 EGE Aggregate Log Change Notices: Not Supported 00:15:04.190 Normal NVM Subsystem Shutdown event: Not Supported 00:15:04.190 Zone Descriptor Change Notices: Not Supported 00:15:04.190 Discovery Log Change Notices: Not Supported 00:15:04.190 Controller Attributes 00:15:04.190 128-bit Host Identifier: Supported 00:15:04.190 Non-Operational Permissive Mode: Not Supported 00:15:04.190 NVM Sets: Not Supported 00:15:04.190 Read Recovery Levels: Not Supported 00:15:04.190 Endurance Groups: Not Supported 00:15:04.190 Predictable Latency Mode: Not Supported 00:15:04.190 Traffic Based Keep ALive: Not Supported 00:15:04.190 
Namespace Granularity: Not Supported 00:15:04.190 SQ Associations: Not Supported 00:15:04.190 UUID List: Not Supported 00:15:04.190 Multi-Domain Subsystem: Not Supported 00:15:04.190 Fixed Capacity Management: Not Supported 00:15:04.190 Variable Capacity Management: Not Supported 00:15:04.190 Delete Endurance Group: Not Supported 00:15:04.190 Delete NVM Set: Not Supported 00:15:04.190 Extended LBA Formats Supported: Not Supported 00:15:04.190 Flexible Data Placement Supported: Not Supported 00:15:04.190 00:15:04.190 Controller Memory Buffer Support 00:15:04.190 ================================ 00:15:04.190 Supported: No 00:15:04.190 00:15:04.190 Persistent Memory Region Support 00:15:04.190 ================================ 00:15:04.190 Supported: No 00:15:04.190 00:15:04.190 Admin Command Set Attributes 00:15:04.190 ============================ 00:15:04.190 Security Send/Receive: Not Supported 00:15:04.190 Format NVM: Not Supported 00:15:04.190 Firmware Activate/Download: Not Supported 00:15:04.190 Namespace Management: Not Supported 00:15:04.190 Device Self-Test: Not Supported 00:15:04.190 Directives: Not Supported 00:15:04.190 NVMe-MI: Not Supported 00:15:04.190 Virtualization Management: Not Supported 00:15:04.190 Doorbell Buffer Config: Not Supported 00:15:04.190 Get LBA Status Capability: Not Supported 00:15:04.190 Command & Feature Lockdown Capability: Not Supported 00:15:04.190 Abort Command Limit: 4 00:15:04.190 Async Event Request Limit: 4 00:15:04.190 Number of Firmware Slots: N/A 00:15:04.190 Firmware Slot 1 Read-Only: N/A 00:15:04.190 Firmware Activation Without Reset: N/A 00:15:04.190 Multiple Update Detection Support: N/A 00:15:04.190 Firmware Update Granularity: No Information Provided 00:15:04.190 Per-Namespace SMART Log: No 00:15:04.190 Asymmetric Namespace Access Log Page: Not Supported 00:15:04.190 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:04.190 Command Effects Log Page: Supported 00:15:04.190 Get Log Page Extended Data: Supported 00:15:04.190 Telemetry Log Pages: Not Supported 00:15:04.190 Persistent Event Log Pages: Not Supported 00:15:04.190 Supported Log Pages Log Page: May Support 00:15:04.190 Commands Supported & Effects Log Page: Not Supported 00:15:04.190 Feature Identifiers & Effects Log Page:May Support 00:15:04.190 NVMe-MI Commands & Effects Log Page: May Support 00:15:04.190 Data Area 4 for Telemetry Log: Not Supported 00:15:04.190 Error Log Page Entries Supported: 128 00:15:04.190 Keep Alive: Supported 00:15:04.190 Keep Alive Granularity: 10000 ms 00:15:04.190 00:15:04.190 NVM Command Set Attributes 00:15:04.190 ========================== 00:15:04.190 Submission Queue Entry Size 00:15:04.190 Max: 64 00:15:04.190 Min: 64 00:15:04.190 Completion Queue Entry Size 00:15:04.190 Max: 16 00:15:04.190 Min: 16 00:15:04.190 Number of Namespaces: 32 00:15:04.190 Compare Command: Supported 00:15:04.190 Write Uncorrectable Command: Not Supported 00:15:04.191 Dataset Management Command: Supported 00:15:04.191 Write Zeroes Command: Supported 00:15:04.191 Set Features Save Field: Not Supported 00:15:04.191 Reservations: Not Supported 00:15:04.191 Timestamp: Not Supported 00:15:04.191 Copy: Supported 00:15:04.191 Volatile Write Cache: Present 00:15:04.191 Atomic Write Unit (Normal): 1 00:15:04.191 Atomic Write Unit (PFail): 1 00:15:04.191 Atomic Compare & Write Unit: 1 00:15:04.191 Fused Compare & Write: Supported 00:15:04.191 Scatter-Gather List 00:15:04.191 SGL Command Set: Supported (Dword aligned) 00:15:04.191 SGL Keyed: Not Supported 00:15:04.191 SGL Bit Bucket 
Descriptor: Not Supported 00:15:04.191 SGL Metadata Pointer: Not Supported 00:15:04.191 Oversized SGL: Not Supported 00:15:04.191 SGL Metadata Address: Not Supported 00:15:04.191 SGL Offset: Not Supported 00:15:04.191 Transport SGL Data Block: Not Supported 00:15:04.191 Replay Protected Memory Block: Not Supported 00:15:04.191 00:15:04.191 Firmware Slot Information 00:15:04.191 ========================= 00:15:04.191 Active slot: 1 00:15:04.191 Slot 1 Firmware Revision: 24.01.1 00:15:04.191 00:15:04.191 00:15:04.191 Commands Supported and Effects 00:15:04.191 ============================== 00:15:04.191 Admin Commands 00:15:04.191 -------------- 00:15:04.191 Get Log Page (02h): Supported 00:15:04.191 Identify (06h): Supported 00:15:04.191 Abort (08h): Supported 00:15:04.191 Set Features (09h): Supported 00:15:04.191 Get Features (0Ah): Supported 00:15:04.191 Asynchronous Event Request (0Ch): Supported 00:15:04.191 Keep Alive (18h): Supported 00:15:04.191 I/O Commands 00:15:04.191 ------------ 00:15:04.191 Flush (00h): Supported LBA-Change 00:15:04.191 Write (01h): Supported LBA-Change 00:15:04.191 Read (02h): Supported 00:15:04.191 Compare (05h): Supported 00:15:04.191 Write Zeroes (08h): Supported LBA-Change 00:15:04.191 Dataset Management (09h): Supported LBA-Change 00:15:04.191 Copy (19h): Supported LBA-Change 00:15:04.191 Unknown (79h): Supported LBA-Change 00:15:04.191 Unknown (7Ah): Supported 00:15:04.191 00:15:04.191 Error Log 00:15:04.191 ========= 00:15:04.191 00:15:04.191 Arbitration 00:15:04.191 =========== 00:15:04.191 Arbitration Burst: 1 00:15:04.191 00:15:04.191 Power Management 00:15:04.191 ================ 00:15:04.191 Number of Power States: 1 00:15:04.191 Current Power State: Power State #0 00:15:04.191 Power State #0: 00:15:04.191 Max Power: 0.00 W 00:15:04.191 Non-Operational State: Operational 00:15:04.191 Entry Latency: Not Reported 00:15:04.191 Exit Latency: Not Reported 00:15:04.191 Relative Read Throughput: 0 00:15:04.191 Relative Read Latency: 0 00:15:04.191 Relative Write Throughput: 0 00:15:04.191 Relative Write Latency: 0 00:15:04.191 Idle Power: Not Reported 00:15:04.191 Active Power: Not Reported 00:15:04.191 Non-Operational Permissive Mode: Not Supported 00:15:04.191 00:15:04.191 Health Information 00:15:04.191 ================== 00:15:04.191 Critical Warnings: 00:15:04.191 Available Spare Space: OK 00:15:04.191 Temperature: OK 00:15:04.191 Device Reliability: OK 00:15:04.191 Read Only: No 00:15:04.191 Volatile Memory Backup: OK 00:15:04.191 Current Temperature: 0 Kelvin[2024-11-17 19:24:02.256625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:04.191 [2024-11-17 19:24:02.256641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:04.191 [2024-11-17 19:24:02.256700] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:04.191 [2024-11-17 19:24:02.256720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.191 [2024-11-17 19:24:02.256735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.191 [2024-11-17 19:24:02.256746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:04.191 [2024-11-17 19:24:02.256755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.191 [2024-11-17 19:24:02.260697] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:04.191 [2024-11-17 19:24:02.260720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:04.191 [2024-11-17 19:24:02.261369] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:04.191 [2024-11-17 19:24:02.261382] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:04.191 [2024-11-17 19:24:02.262334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:04.191 [2024-11-17 19:24:02.262357] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:04.191 [2024-11-17 19:24:02.262409] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:04.191 [2024-11-17 19:24:02.264382] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:04.191 (-273 Celsius) 00:15:04.191 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:04.191 Available Spare: 0% 00:15:04.191 Available Spare Threshold: 0% 00:15:04.191 Life Percentage Used: 0% 00:15:04.191 Data Units Read: 0 00:15:04.191 Data Units Written: 0 00:15:04.191 Host Read Commands: 0 00:15:04.191 Host Write Commands: 0 00:15:04.191 Controller Busy Time: 0 minutes 00:15:04.191 Power Cycles: 0 00:15:04.191 Power On Hours: 0 hours 00:15:04.191 Unsafe Shutdowns: 0 00:15:04.191 Unrecoverable Media Errors: 0 00:15:04.191 Lifetime Error Log Entries: 0 00:15:04.191 Warning Temperature Time: 0 minutes 00:15:04.191 Critical Temperature Time: 0 minutes 00:15:04.191 00:15:04.191 Number of Queues 00:15:04.191 ================ 00:15:04.191 Number of I/O Submission Queues: 127 00:15:04.191 Number of I/O Completion Queues: 127 00:15:04.191 00:15:04.191 Active Namespaces 00:15:04.191 ================= 00:15:04.191 Namespace ID:1 00:15:04.191 Error Recovery Timeout: Unlimited 00:15:04.191 Command Set Identifier: NVM (00h) 00:15:04.191 Deallocate: Supported 00:15:04.191 Deallocated/Unwritten Error: Not Supported 00:15:04.191 Deallocated Read Value: Unknown 00:15:04.191 Deallocate in Write Zeroes: Not Supported 00:15:04.191 Deallocated Guard Field: 0xFFFF 00:15:04.191 Flush: Supported 00:15:04.191 Reservation: Supported 00:15:04.191 Namespace Sharing Capabilities: Multiple Controllers 00:15:04.191 Size (in LBAs): 131072 (0GiB) 00:15:04.191 Capacity (in LBAs): 131072 (0GiB) 00:15:04.191 Utilization (in LBAs): 131072 (0GiB) 00:15:04.191 NGUID: 95ED9030A37E45C7BCC9B0860B819E8A 00:15:04.191 UUID: 95ed9030-a37e-45c7-bcc9-b0860b819e8a 00:15:04.191 Thin Provisioning: Not Supported 00:15:04.191 Per-NS Atomic Units: Yes 00:15:04.191 Atomic Boundary Size (Normal): 0 00:15:04.191 Atomic Boundary Size (PFail): 0 00:15:04.191 Atomic Boundary Offset: 0 00:15:04.191 Maximum Single Source Range Length: 65535 00:15:04.191 Maximum Copy Length: 65535 00:15:04.191 Maximum Source Range Count: 1 00:15:04.191 NGUID/EUI64 
Never Reused: No 00:15:04.191 Namespace Write Protected: No 00:15:04.191 Number of LBA Formats: 1 00:15:04.191 Current LBA Format: LBA Format #00 00:15:04.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:04.191 00:15:04.191 19:24:02 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:04.191 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.467 Initializing NVMe Controllers 00:15:09.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:09.467 Initialization complete. Launching workers. 00:15:09.467 ======================================================== 00:15:09.467 Latency(us) 00:15:09.467 Device Information : IOPS MiB/s Average min max 00:15:09.467 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 38082.80 148.76 3361.26 1139.08 9807.05 00:15:09.467 ======================================================== 00:15:09.467 Total : 38082.80 148.76 3361.26 1139.08 9807.05 00:15:09.467 00:15:09.467 19:24:07 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:09.467 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.733 Initializing NVMe Controllers 00:15:14.733 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.733 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:14.733 Initialization complete. Launching workers. 00:15:14.733 ======================================================== 00:15:14.733 Latency(us) 00:15:14.733 Device Information : IOPS MiB/s Average min max 00:15:14.733 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.90 62.70 7979.86 4943.26 15133.20 00:15:14.733 ======================================================== 00:15:14.733 Total : 16050.90 62.70 7979.86 4943.26 15133.20 00:15:14.733 00:15:14.733 19:24:12 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:14.733 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.007 Initializing NVMe Controllers 00:15:20.007 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.007 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.007 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:20.007 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:20.007 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:20.007 Initialization complete. Launching workers. 
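As a quick cross-check of the two spdk_nvme_perf runs above: bandwidth is reported as IOPS x I/O size (-o 4096), and with the fixed queue depth from -q 128 the average latency follows Little's law (average latency ~ queue depth / IOPS). The one-liners below are an editorial sketch, not part of the test run, and assume only a POSIX awk on the host:
  awk 'BEGIN { print 38082.80*4096/1048576, 128/38082.80*1e6 }'   # read run:  ~148.76 MiB/s and ~3361 us avg latency, matching the reported 148.76 / 3361.26
  awk 'BEGIN { print 16050.90*4096/1048576, 128/16050.90*1e6 }'   # write run: ~62.70 MiB/s and ~7975 us avg latency, close to the reported 62.70 / 7979.86
The small residual gap on the write run (7975 us derived vs 7979.86 us reported) is presumably just rounding and measurement-window noise.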
00:15:20.007 Starting thread on core 2 00:15:20.007 Starting thread on core 3 00:15:20.007 Starting thread on core 1 00:15:20.007 19:24:18 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:20.007 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.296 Initializing NVMe Controllers 00:15:23.296 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.296 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:23.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:23.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:23.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:23.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:23.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:23.296 Initialization complete. Launching workers. 00:15:23.296 Starting thread on core 1 with urgent priority queue 00:15:23.296 Starting thread on core 2 with urgent priority queue 00:15:23.296 Starting thread on core 3 with urgent priority queue 00:15:23.296 Starting thread on core 0 with urgent priority queue 00:15:23.296 SPDK bdev Controller (SPDK1 ) core 0: 4373.67 IO/s 22.86 secs/100000 ios 00:15:23.296 SPDK bdev Controller (SPDK1 ) core 1: 4390.00 IO/s 22.78 secs/100000 ios 00:15:23.296 SPDK bdev Controller (SPDK1 ) core 2: 4084.00 IO/s 24.49 secs/100000 ios 00:15:23.296 SPDK bdev Controller (SPDK1 ) core 3: 4467.67 IO/s 22.38 secs/100000 ios 00:15:23.296 ======================================================== 00:15:23.296 00:15:23.296 19:24:21 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:23.296 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.555 Initializing NVMe Controllers 00:15:23.555 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.555 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.555 Namespace ID: 1 size: 0GB 00:15:23.555 Initialization complete. 00:15:23.555 INFO: using host memory buffer for IO 00:15:23.555 Hello world! 00:15:23.555 19:24:21 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:23.813 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.189 Initializing NVMe Controllers 00:15:25.189 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.189 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.189 Initialization complete. Launching workers. 
00:15:25.189 submit (in ns) avg, min, max = 7345.4, 3460.0, 4003035.6 00:15:25.189 complete (in ns) avg, min, max = 25950.5, 2037.8, 4997374.4 00:15:25.189 00:15:25.189 Submit histogram 00:15:25.189 ================ 00:15:25.189 Range in us Cumulative Count 00:15:25.189 3.437 - 3.461: 0.0072% ( 1) 00:15:25.189 3.461 - 3.484: 0.4262% ( 58) 00:15:25.189 3.484 - 3.508: 1.8710% ( 200) 00:15:25.189 3.508 - 3.532: 5.7069% ( 531) 00:15:25.189 3.532 - 3.556: 11.7099% ( 831) 00:15:25.189 3.556 - 3.579: 22.2062% ( 1453) 00:15:25.189 3.579 - 3.603: 32.0740% ( 1366) 00:15:25.189 3.603 - 3.627: 41.0677% ( 1245) 00:15:25.189 3.627 - 3.650: 48.9417% ( 1090) 00:15:25.189 3.650 - 3.674: 54.8797% ( 822) 00:15:25.189 3.674 - 3.698: 59.1563% ( 592) 00:15:25.189 3.698 - 3.721: 63.0210% ( 535) 00:15:25.189 3.721 - 3.745: 66.1201% ( 429) 00:15:25.189 3.745 - 3.769: 69.5442% ( 474) 00:15:25.189 3.769 - 3.793: 73.1272% ( 496) 00:15:25.189 3.793 - 3.816: 77.0137% ( 538) 00:15:25.189 3.816 - 3.840: 81.0157% ( 554) 00:15:25.189 3.840 - 3.864: 84.6421% ( 502) 00:15:25.189 3.864 - 3.887: 87.2932% ( 367) 00:15:25.189 3.887 - 3.911: 88.9764% ( 233) 00:15:25.189 3.911 - 3.935: 90.5223% ( 214) 00:15:25.189 3.935 - 3.959: 91.7503% ( 170) 00:15:25.189 3.959 - 3.982: 92.7472% ( 138) 00:15:25.189 3.982 - 4.006: 93.6285% ( 122) 00:15:25.189 4.006 - 4.030: 94.4160% ( 109) 00:15:25.189 4.030 - 4.053: 95.1889% ( 107) 00:15:25.189 4.053 - 4.077: 95.7740% ( 81) 00:15:25.189 4.077 - 4.101: 96.2436% ( 65) 00:15:25.189 4.101 - 4.124: 96.4531% ( 29) 00:15:25.189 4.124 - 4.148: 96.6698% ( 30) 00:15:25.189 4.148 - 4.172: 96.8432% ( 24) 00:15:25.189 4.172 - 4.196: 96.9732% ( 18) 00:15:25.189 4.196 - 4.219: 97.0527% ( 11) 00:15:25.189 4.219 - 4.243: 97.1393% ( 12) 00:15:25.189 4.243 - 4.267: 97.2260% ( 12) 00:15:25.189 4.267 - 4.290: 97.3272% ( 14) 00:15:25.189 4.290 - 4.314: 97.4066% ( 11) 00:15:25.189 4.314 - 4.338: 97.4644% ( 8) 00:15:25.189 4.338 - 4.361: 97.5150% ( 7) 00:15:25.189 4.361 - 4.385: 97.5294% ( 2) 00:15:25.189 4.385 - 4.409: 97.5439% ( 2) 00:15:25.189 4.409 - 4.433: 97.5583% ( 2) 00:15:25.189 4.433 - 4.456: 97.5656% ( 1) 00:15:25.189 4.480 - 4.504: 97.5800% ( 2) 00:15:25.189 4.504 - 4.527: 97.5872% ( 1) 00:15:25.189 4.527 - 4.551: 97.5945% ( 1) 00:15:25.189 4.551 - 4.575: 97.6017% ( 1) 00:15:25.189 4.575 - 4.599: 97.6089% ( 1) 00:15:25.189 4.599 - 4.622: 97.6450% ( 5) 00:15:25.189 4.622 - 4.646: 97.6739% ( 4) 00:15:25.189 4.646 - 4.670: 97.7173% ( 6) 00:15:25.189 4.670 - 4.693: 97.7534% ( 5) 00:15:25.189 4.693 - 4.717: 97.7750% ( 3) 00:15:25.189 4.717 - 4.741: 97.8256% ( 7) 00:15:25.189 4.741 - 4.764: 97.8690% ( 6) 00:15:25.189 4.764 - 4.788: 97.9267% ( 8) 00:15:25.189 4.788 - 4.812: 97.9701% ( 6) 00:15:25.189 4.812 - 4.836: 98.0134% ( 6) 00:15:25.189 4.836 - 4.859: 98.0568% ( 6) 00:15:25.189 4.859 - 4.883: 98.0857% ( 4) 00:15:25.189 4.883 - 4.907: 98.1146% ( 4) 00:15:25.189 4.907 - 4.930: 98.1290% ( 2) 00:15:25.189 4.930 - 4.954: 98.1579% ( 4) 00:15:25.189 4.954 - 4.978: 98.2013% ( 6) 00:15:25.189 4.978 - 5.001: 98.2157% ( 2) 00:15:25.189 5.001 - 5.025: 98.2229% ( 1) 00:15:25.189 5.025 - 5.049: 98.2446% ( 3) 00:15:25.189 5.049 - 5.073: 98.2518% ( 1) 00:15:25.189 5.073 - 5.096: 98.2590% ( 1) 00:15:25.189 5.096 - 5.120: 98.3096% ( 7) 00:15:25.189 5.120 - 5.144: 98.3313% ( 3) 00:15:25.189 5.167 - 5.191: 98.3457% ( 2) 00:15:25.189 5.215 - 5.239: 98.3530% ( 1) 00:15:25.189 5.239 - 5.262: 98.3674% ( 2) 00:15:25.189 5.262 - 5.286: 98.3746% ( 1) 00:15:25.189 5.286 - 5.310: 98.3819% ( 1) 00:15:25.189 5.333 - 5.357: 98.3963% ( 2) 
00:15:25.189 5.618 - 5.641: 98.4035% ( 1) 00:15:25.189 5.831 - 5.855: 98.4107% ( 1) 00:15:25.189 6.116 - 6.163: 98.4180% ( 1) 00:15:25.189 6.258 - 6.305: 98.4252% ( 1) 00:15:25.189 6.779 - 6.827: 98.4324% ( 1) 00:15:25.189 6.874 - 6.921: 98.4469% ( 2) 00:15:25.189 6.921 - 6.969: 98.4541% ( 1) 00:15:25.189 7.016 - 7.064: 98.4613% ( 1) 00:15:25.189 7.111 - 7.159: 98.4685% ( 1) 00:15:25.190 7.159 - 7.206: 98.4758% ( 1) 00:15:25.190 7.206 - 7.253: 98.4830% ( 1) 00:15:25.190 7.301 - 7.348: 98.4974% ( 2) 00:15:25.190 7.396 - 7.443: 98.5191% ( 3) 00:15:25.190 7.538 - 7.585: 98.5263% ( 1) 00:15:25.190 7.727 - 7.775: 98.5336% ( 1) 00:15:25.190 7.822 - 7.870: 98.5408% ( 1) 00:15:25.190 7.917 - 7.964: 98.5552% ( 2) 00:15:25.190 7.964 - 8.012: 98.5697% ( 2) 00:15:25.190 8.059 - 8.107: 98.5769% ( 1) 00:15:25.190 8.107 - 8.154: 98.5913% ( 2) 00:15:25.190 8.154 - 8.201: 98.6058% ( 2) 00:15:25.190 8.296 - 8.344: 98.6130% ( 1) 00:15:25.190 8.391 - 8.439: 98.6202% ( 1) 00:15:25.190 8.439 - 8.486: 98.6275% ( 1) 00:15:25.190 8.486 - 8.533: 98.6419% ( 2) 00:15:25.190 8.581 - 8.628: 98.6491% ( 1) 00:15:25.190 8.628 - 8.676: 98.6636% ( 2) 00:15:25.190 8.770 - 8.818: 98.6780% ( 2) 00:15:25.190 8.865 - 8.913: 98.6925% ( 2) 00:15:25.190 8.913 - 8.960: 98.7069% ( 2) 00:15:25.190 9.150 - 9.197: 98.7142% ( 1) 00:15:25.190 9.244 - 9.292: 98.7286% ( 2) 00:15:25.190 9.292 - 9.339: 98.7430% ( 2) 00:15:25.190 9.387 - 9.434: 98.7503% ( 1) 00:15:25.190 9.576 - 9.624: 98.7575% ( 1) 00:15:25.190 10.098 - 10.145: 98.7647% ( 1) 00:15:25.190 10.193 - 10.240: 98.7719% ( 1) 00:15:25.190 10.240 - 10.287: 98.7864% ( 2) 00:15:25.190 10.287 - 10.335: 98.7936% ( 1) 00:15:25.190 10.524 - 10.572: 98.8008% ( 1) 00:15:25.190 10.714 - 10.761: 98.8081% ( 1) 00:15:25.190 10.999 - 11.046: 98.8153% ( 1) 00:15:25.190 11.093 - 11.141: 98.8225% ( 1) 00:15:25.190 11.520 - 11.567: 98.8297% ( 1) 00:15:25.190 11.662 - 11.710: 98.8370% ( 1) 00:15:25.190 11.804 - 11.852: 98.8442% ( 1) 00:15:25.190 11.994 - 12.041: 98.8514% ( 1) 00:15:25.190 12.421 - 12.516: 98.8586% ( 1) 00:15:25.190 12.990 - 13.084: 98.8659% ( 1) 00:15:25.190 13.179 - 13.274: 98.8731% ( 1) 00:15:25.190 13.369 - 13.464: 98.8875% ( 2) 00:15:25.190 13.464 - 13.559: 98.8947% ( 1) 00:15:25.190 13.559 - 13.653: 98.9020% ( 1) 00:15:25.190 13.748 - 13.843: 98.9164% ( 2) 00:15:25.190 13.938 - 14.033: 98.9236% ( 1) 00:15:25.190 14.127 - 14.222: 98.9309% ( 1) 00:15:25.190 14.222 - 14.317: 98.9381% ( 1) 00:15:25.190 14.981 - 15.076: 98.9453% ( 1) 00:15:25.190 16.687 - 16.782: 98.9525% ( 1) 00:15:25.190 16.972 - 17.067: 98.9670% ( 2) 00:15:25.190 17.161 - 17.256: 98.9742% ( 1) 00:15:25.190 17.256 - 17.351: 99.0031% ( 4) 00:15:25.190 17.351 - 17.446: 99.0176% ( 2) 00:15:25.190 17.446 - 17.541: 99.0464% ( 4) 00:15:25.190 17.541 - 17.636: 99.0970% ( 7) 00:15:25.190 17.636 - 17.730: 99.1620% ( 9) 00:15:25.190 17.730 - 17.825: 99.1909% ( 4) 00:15:25.190 17.825 - 17.920: 99.2704% ( 11) 00:15:25.190 17.920 - 18.015: 99.3065% ( 5) 00:15:25.190 18.015 - 18.110: 99.3499% ( 6) 00:15:25.190 18.110 - 18.204: 99.4365% ( 12) 00:15:25.190 18.204 - 18.299: 99.5088% ( 10) 00:15:25.190 18.299 - 18.394: 99.5955% ( 12) 00:15:25.190 18.394 - 18.489: 99.6316% ( 5) 00:15:25.190 18.489 - 18.584: 99.6677% ( 5) 00:15:25.190 18.584 - 18.679: 99.6894% ( 3) 00:15:25.190 18.679 - 18.773: 99.7327% ( 6) 00:15:25.190 18.773 - 18.868: 99.7616% ( 4) 00:15:25.190 18.868 - 18.963: 99.7688% ( 1) 00:15:25.190 18.963 - 19.058: 99.7761% ( 1) 00:15:25.190 19.058 - 19.153: 99.7977% ( 3) 00:15:25.190 19.153 - 19.247: 99.8194% ( 3) 00:15:25.190 
19.247 - 19.342: 99.8411% ( 3) 00:15:25.190 19.627 - 19.721: 99.8483% ( 1) 00:15:25.190 20.101 - 20.196: 99.8555% ( 1) 00:15:25.190 20.385 - 20.480: 99.8627% ( 1) 00:15:25.190 21.523 - 21.618: 99.8700% ( 1) 00:15:25.190 21.618 - 21.713: 99.8772% ( 1) 00:15:25.190 21.807 - 21.902: 99.8844% ( 1) 00:15:25.190 22.850 - 22.945: 99.8916% ( 1) 00:15:25.190 23.609 - 23.704: 99.8989% ( 1) 00:15:25.190 23.704 - 23.799: 99.9061% ( 1) 00:15:25.190 28.444 - 28.634: 99.9133% ( 1) 00:15:25.190 3980.705 - 4004.978: 100.0000% ( 12) 00:15:25.190 00:15:25.190 Complete histogram 00:15:25.190 ================== 00:15:25.190 Range in us Cumulative Count 00:15:25.190 2.027 - 2.039: 0.0144% ( 2) 00:15:25.190 2.039 - 2.050: 6.9349% ( 958) 00:15:25.190 2.050 - 2.062: 16.3043% ( 1297) 00:15:25.190 2.062 - 2.074: 18.1897% ( 261) 00:15:25.190 2.074 - 2.086: 46.7745% ( 3957) 00:15:25.190 2.086 - 2.098: 62.6237% ( 2194) 00:15:25.190 2.098 - 2.110: 64.5669% ( 269) 00:15:25.190 2.110 - 2.121: 68.2655% ( 512) 00:15:25.190 2.121 - 2.133: 69.5225% ( 174) 00:15:25.190 2.133 - 2.145: 71.9497% ( 336) 00:15:25.190 2.145 - 2.157: 81.4780% ( 1319) 00:15:25.190 2.157 - 2.169: 84.2086% ( 378) 00:15:25.190 2.169 - 2.181: 85.4006% ( 165) 00:15:25.190 2.181 - 2.193: 87.4810% ( 288) 00:15:25.190 2.193 - 2.204: 88.1529% ( 93) 00:15:25.190 2.204 - 2.216: 89.4893% ( 185) 00:15:25.190 2.216 - 2.228: 93.5852% ( 567) 00:15:25.190 2.228 - 2.240: 94.5243% ( 130) 00:15:25.190 2.240 - 2.252: 94.9650% ( 61) 00:15:25.190 2.252 - 2.264: 95.4128% ( 62) 00:15:25.190 2.264 - 2.276: 95.5140% ( 14) 00:15:25.190 2.276 - 2.287: 95.8029% ( 40) 00:15:25.190 2.287 - 2.299: 96.0558% ( 35) 00:15:25.190 2.299 - 2.311: 96.1641% ( 15) 00:15:25.190 2.311 - 2.323: 96.3014% ( 19) 00:15:25.190 2.323 - 2.335: 96.6265% ( 45) 00:15:25.190 2.335 - 2.347: 96.8071% ( 25) 00:15:25.190 2.347 - 2.359: 97.1105% ( 42) 00:15:25.190 2.359 - 2.370: 97.5294% ( 58) 00:15:25.190 2.370 - 2.382: 97.7173% ( 26) 00:15:25.190 2.382 - 2.394: 97.8617% ( 20) 00:15:25.190 2.394 - 2.406: 97.9701% ( 15) 00:15:25.190 2.406 - 2.418: 98.0712% ( 14) 00:15:25.190 2.418 - 2.430: 98.2085% ( 19) 00:15:25.190 2.430 - 2.441: 98.3313% ( 17) 00:15:25.190 2.441 - 2.453: 98.4035% ( 10) 00:15:25.190 2.453 - 2.465: 98.4541% ( 7) 00:15:25.190 2.465 - 2.477: 98.4758% ( 3) 00:15:25.190 2.477 - 2.489: 98.4902% ( 2) 00:15:25.190 2.489 - 2.501: 98.4974% ( 1) 00:15:25.190 2.501 - 2.513: 98.5191% ( 3) 00:15:25.190 2.513 - 2.524: 98.5336% ( 2) 00:15:25.190 2.524 - 2.536: 98.5552% ( 3) 00:15:25.190 2.536 - 2.548: 98.5625% ( 1) 00:15:25.190 2.548 - 2.560: 98.5697% ( 1) 00:15:25.190 2.607 - 2.619: 98.5769% ( 1) 00:15:25.190 2.619 - 2.631: 98.5841% ( 1) 00:15:25.190 2.655 - 2.667: 98.5913% ( 1) 00:15:25.190 2.868 - 2.880: 98.5986% ( 1) 00:15:25.190 3.153 - 3.176: 98.6130% ( 2) 00:15:25.190 3.176 - 3.200: 98.6202% ( 1) 00:15:25.190 3.200 - 3.224: 98.6275% ( 1) 00:15:25.190 3.224 - 3.247: 98.6491% ( 3) 00:15:25.190 3.271 - 3.295: 98.6636% ( 2) 00:15:25.190 3.295 - 3.319: 98.6780% ( 2) 00:15:25.190 3.342 - 3.366: 98.6997% ( 3) 00:15:25.190 3.366 - 3.390: 98.7069% ( 1) 00:15:25.190 3.390 - 3.413: 98.7358% ( 4) 00:15:25.190 3.437 - 3.461: 98.7503% ( 2) 00:15:25.190 3.461 - 3.484: 98.7575% ( 1) 00:15:25.190 3.508 - 3.532: 98.7647% ( 1) 00:15:25.190 3.579 - 3.603: 98.7792% ( 2) 00:15:25.190 3.674 - 3.698: 98.7864% ( 1) 00:15:25.190 3.721 - 3.745: 98.7936% ( 1) 00:15:25.190 3.793 - 3.816: 98.8008% ( 1) 00:15:25.190 4.764 - 4.788: 98.8081% ( 1) 00:15:25.190 6.068 - 6.116: 98.8153% ( 1) 00:15:25.190 6.116 - 6.163: 98.8225% ( 1) 
00:15:25.190 6.258 - 6.305: 98.8370% ( 2) 00:15:25.190 6.353 - 6.400: 98.8442% ( 1) 00:15:25.190 6.542 - 6.590: 98.8514% ( 1) 00:15:25.190 6.779 - 6.827: 98.8586% ( 1) 00:15:25.190 7.111 - 7.159: 98.8659% ( 1) 00:15:25.190 7.206 - 7.253: 98.8731% ( 1) 00:15:25.190 8.770 - 8.818: 98.8803% ( 1) 00:15:25.190 9.292 - 9.339: 98.8875% ( 1) 00:15:25.190 11.662 - 11.710: 98.8947% ( 1) 00:15:25.190 15.265 - 15.360: 98.9020% ( 1) 00:15:25.190 15.360 - 15.455: 98.9092% ( 1) 00:15:25.190 15.455 - 15.550: 98.9164% ( 1) 00:15:25.190 15.550 - 15.644: 98.9236% ( 1) 00:15:25.190 15.644 - 15.739: 98.9309% ( 1) 00:15:25.190 15.739 - 15.834: 98.9453% ( 2) 00:15:25.190 15.834 - 15.929: 98.9670% ( 3) 00:15:25.190 15.929 - 16.024: 98.9959% ( 4) 00:15:25.190 16.024 - 16.119: 99.0609% ( 9) 00:15:25.190 16.119 - 16.213: 99.0826% ( 3) 00:15:25.190 16.213 - 16.308: 99.1187% ( 5) 00:15:25.190 16.308 - 16.403: 99.1404% ( 3) 00:15:25.190 16.403 - 16.498: 99.1476% ( 1) 00:15:25.190 16.498 - 16.593: 99.2198% ( 10) 00:15:25.190 16.593 - 16.687: 99.2632% ( 6) 00:15:25.190 16.687 - 16.782: 99.3065% ( 6) 00:15:25.190 16.782 - 16.877: 99.3426% ( 5) 00:15:25.190 16.877 - 16.972: 99.3499% ( 1) 00:15:25.190 17.067 - 17.161: 99.3571% ( 1) 00:15:25.190 17.161 - 17.256: 99.3643% ( 1) 00:15:25.191 17.256 - 17.351: 99.3787% ( 2) 00:15:25.191 17.351 - 17.446: 99.3860% ( 1) 00:15:25.191 18.110 - 18.204: 99.3932% ( 1) 00:15:25.191 19.247 - 19.342: 99.4004% ( 1) 00:15:25.191 19.816 - 19.911: 99.4076% ( 1) 00:15:25.191 3980.705 - 4004.978: 99.9567% ( 76) 00:15:25.191 4004.978 - 4029.250: 99.9928% ( 5) 00:15:25.191 4975.881 - 5000.154: 100.0000% ( 1) 00:15:25.191 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:25.191 [2024-11-17 19:24:23.388602] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:25.191 [ 00:15:25.191 { 00:15:25.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:25.191 "subtype": "Discovery", 00:15:25.191 "listen_addresses": [], 00:15:25.191 "allow_any_host": true, 00:15:25.191 "hosts": [] 00:15:25.191 }, 00:15:25.191 { 00:15:25.191 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:25.191 "subtype": "NVMe", 00:15:25.191 "listen_addresses": [ 00:15:25.191 { 00:15:25.191 "transport": "VFIOUSER", 00:15:25.191 "trtype": "VFIOUSER", 00:15:25.191 "adrfam": "IPv4", 00:15:25.191 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:25.191 "trsvcid": "0" 00:15:25.191 } 00:15:25.191 ], 00:15:25.191 "allow_any_host": true, 00:15:25.191 "hosts": [], 00:15:25.191 "serial_number": "SPDK1", 00:15:25.191 "model_number": "SPDK bdev Controller", 00:15:25.191 "max_namespaces": 32, 00:15:25.191 "min_cntlid": 1, 00:15:25.191 "max_cntlid": 65519, 00:15:25.191 "namespaces": [ 00:15:25.191 { 00:15:25.191 "nsid": 1, 00:15:25.191 "bdev_name": "Malloc1", 00:15:25.191 "name": "Malloc1", 00:15:25.191 "nguid": "95ED9030A37E45C7BCC9B0860B819E8A", 00:15:25.191 "uuid": "95ed9030-a37e-45c7-bcc9-b0860b819e8a" 
00:15:25.191 } 00:15:25.191 ] 00:15:25.191 }, 00:15:25.191 { 00:15:25.191 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:25.191 "subtype": "NVMe", 00:15:25.191 "listen_addresses": [ 00:15:25.191 { 00:15:25.191 "transport": "VFIOUSER", 00:15:25.191 "trtype": "VFIOUSER", 00:15:25.191 "adrfam": "IPv4", 00:15:25.191 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:25.191 "trsvcid": "0" 00:15:25.191 } 00:15:25.191 ], 00:15:25.191 "allow_any_host": true, 00:15:25.191 "hosts": [], 00:15:25.191 "serial_number": "SPDK2", 00:15:25.191 "model_number": "SPDK bdev Controller", 00:15:25.191 "max_namespaces": 32, 00:15:25.191 "min_cntlid": 1, 00:15:25.191 "max_cntlid": 65519, 00:15:25.191 "namespaces": [ 00:15:25.191 { 00:15:25.191 "nsid": 1, 00:15:25.191 "bdev_name": "Malloc2", 00:15:25.191 "name": "Malloc2", 00:15:25.191 "nguid": "7EAAB875929A44B7AD55E644D08D1269", 00:15:25.191 "uuid": "7eaab875-929a-44b7-ad55-e644d08d1269" 00:15:25.191 } 00:15:25.191 ] 00:15:25.191 } 00:15:25.191 ] 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1175435 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:25.191 19:24:23 -- common/autotest_common.sh@1254 -- # local i=0 00:15:25.191 19:24:23 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:25.191 19:24:23 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:25.191 19:24:23 -- common/autotest_common.sh@1265 -- # return 0 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:25.191 19:24:23 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:25.449 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.449 Malloc3 00:15:25.449 19:24:23 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:25.706 19:24:23 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:25.964 Asynchronous Event Request test 00:15:25.964 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.964 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.964 Registering asynchronous event callbacks... 00:15:25.964 Starting namespace attribute notice tests for all controllers... 00:15:25.964 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:25.964 aer_cb - Changed Namespace 00:15:25.964 Cleaning up... 
00:15:25.964 [ 00:15:25.964 { 00:15:25.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:25.964 "subtype": "Discovery", 00:15:25.964 "listen_addresses": [], 00:15:25.964 "allow_any_host": true, 00:15:25.964 "hosts": [] 00:15:25.964 }, 00:15:25.964 { 00:15:25.964 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:25.964 "subtype": "NVMe", 00:15:25.964 "listen_addresses": [ 00:15:25.964 { 00:15:25.964 "transport": "VFIOUSER", 00:15:25.964 "trtype": "VFIOUSER", 00:15:25.964 "adrfam": "IPv4", 00:15:25.964 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:25.964 "trsvcid": "0" 00:15:25.964 } 00:15:25.964 ], 00:15:25.964 "allow_any_host": true, 00:15:25.964 "hosts": [], 00:15:25.964 "serial_number": "SPDK1", 00:15:25.964 "model_number": "SPDK bdev Controller", 00:15:25.964 "max_namespaces": 32, 00:15:25.964 "min_cntlid": 1, 00:15:25.964 "max_cntlid": 65519, 00:15:25.964 "namespaces": [ 00:15:25.964 { 00:15:25.964 "nsid": 1, 00:15:25.964 "bdev_name": "Malloc1", 00:15:25.964 "name": "Malloc1", 00:15:25.964 "nguid": "95ED9030A37E45C7BCC9B0860B819E8A", 00:15:25.964 "uuid": "95ed9030-a37e-45c7-bcc9-b0860b819e8a" 00:15:25.964 }, 00:15:25.964 { 00:15:25.964 "nsid": 2, 00:15:25.964 "bdev_name": "Malloc3", 00:15:25.964 "name": "Malloc3", 00:15:25.964 "nguid": "304D0B6675974B86B81067A511DF64E7", 00:15:25.964 "uuid": "304d0b66-7597-4b86-b810-67a511df64e7" 00:15:25.964 } 00:15:25.964 ] 00:15:25.964 }, 00:15:25.964 { 00:15:25.964 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:25.964 "subtype": "NVMe", 00:15:25.964 "listen_addresses": [ 00:15:25.964 { 00:15:25.964 "transport": "VFIOUSER", 00:15:25.964 "trtype": "VFIOUSER", 00:15:25.964 "adrfam": "IPv4", 00:15:25.964 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:25.964 "trsvcid": "0" 00:15:25.964 } 00:15:25.964 ], 00:15:25.964 "allow_any_host": true, 00:15:25.964 "hosts": [], 00:15:25.964 "serial_number": "SPDK2", 00:15:25.964 "model_number": "SPDK bdev Controller", 00:15:25.964 "max_namespaces": 32, 00:15:25.964 "min_cntlid": 1, 00:15:25.964 "max_cntlid": 65519, 00:15:25.964 "namespaces": [ 00:15:25.964 { 00:15:25.964 "nsid": 1, 00:15:25.964 "bdev_name": "Malloc2", 00:15:25.964 "name": "Malloc2", 00:15:25.964 "nguid": "7EAAB875929A44B7AD55E644D08D1269", 00:15:25.964 "uuid": "7eaab875-929a-44b7-ad55-e644d08d1269" 00:15:25.964 } 00:15:25.964 ] 00:15:25.964 } 00:15:25.964 ] 00:15:25.964 19:24:24 -- target/nvmf_vfio_user.sh@44 -- # wait 1175435 00:15:25.964 19:24:24 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.964 19:24:24 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:25.964 19:24:24 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:25.964 19:24:24 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:25.964 [2024-11-17 19:24:24.218689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
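The "aer_cb for log page 4 ... Changed Namespace" callback and the nsid 2 (Malloc3) entry in the JSON above come from hot-adding a namespace to cnode1 while the aer test program stays attached: the target raises a Namespace Attribute Changed notice and the host re-reads the Changed Namespace List log page (log page 4). The target-side trigger reduces to the two RPCs already shown earlier in this log (a sketch only, assuming a running nvmf_tgt reachable over the default rpc.py socket):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
The -n 2 argument pins the new namespace to nsid 2, which is exactly what the second nvmf_get_subsystems dump above reports for Malloc3.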
00:15:25.964 [2024-11-17 19:24:24.218735] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175455 ] 00:15:25.964 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.224 [2024-11-17 19:24:24.250867] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:26.224 [2024-11-17 19:24:24.259954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:26.224 [2024-11-17 19:24:24.259998] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5ca612a000 00:15:26.225 [2024-11-17 19:24:24.260955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.261956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.262983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.263990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.264992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.266002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.267008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.268032] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.225 [2024-11-17 19:24:24.269031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:26.225 [2024-11-17 19:24:24.269053] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5ca4e24000 00:15:26.225 [2024-11-17 19:24:24.270174] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.225 [2024-11-17 19:24:24.285329] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:26.225 [2024-11-17 19:24:24.285362] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:26.225 [2024-11-17 19:24:24.290451] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:26.225 [2024-11-17 19:24:24.290504] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:26.225 [2024-11-17 19:24:24.290588] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:26.225 [2024-11-17 19:24:24.290617] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:26.225 [2024-11-17 19:24:24.290629] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:26.225 [2024-11-17 19:24:24.291453] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:26.225 [2024-11-17 19:24:24.291473] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:26.225 [2024-11-17 19:24:24.291485] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:26.225 [2024-11-17 19:24:24.292468] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:26.225 [2024-11-17 19:24:24.292487] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:26.225 [2024-11-17 19:24:24.292501] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:26.225 [2024-11-17 19:24:24.293478] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:26.225 [2024-11-17 19:24:24.293502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:26.225 [2024-11-17 19:24:24.294485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:26.225 [2024-11-17 19:24:24.294505] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:26.225 [2024-11-17 19:24:24.294514] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:26.225 [2024-11-17 19:24:24.294526] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:26.225 [2024-11-17 19:24:24.294635] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:26.225 [2024-11-17 19:24:24.294642] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:26.225 [2024-11-17 19:24:24.294650] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:26.225 [2024-11-17 19:24:24.295488] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:26.225 [2024-11-17 19:24:24.296493] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:26.225 [2024-11-17 19:24:24.297506] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:26.225 [2024-11-17 19:24:24.298538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:26.225 [2024-11-17 19:24:24.299518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:26.225 [2024-11-17 19:24:24.299538] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:26.225 [2024-11-17 19:24:24.299547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.299575] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:26.225 [2024-11-17 19:24:24.299588] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.299604] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.225 [2024-11-17 19:24:24.299614] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.225 [2024-11-17 19:24:24.299630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.225 [2024-11-17 19:24:24.305769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:26.225 [2024-11-17 19:24:24.305793] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:26.225 [2024-11-17 19:24:24.305802] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:26.225 [2024-11-17 19:24:24.305810] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:26.225 [2024-11-17 19:24:24.305818] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:26.225 [2024-11-17 19:24:24.305828] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:26.225 [2024-11-17 19:24:24.305837] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:26.225 [2024-11-17 19:24:24.305846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.305862] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.305880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:26.225 [2024-11-17 19:24:24.315683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:26.225 [2024-11-17 
19:24:24.315739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.225 [2024-11-17 19:24:24.315755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.225 [2024-11-17 19:24:24.315767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.225 [2024-11-17 19:24:24.315779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.225 [2024-11-17 19:24:24.315788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.315804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.315819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:26.225 [2024-11-17 19:24:24.323687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:26.225 [2024-11-17 19:24:24.323706] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:26.225 [2024-11-17 19:24:24.323715] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.323747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.323762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.323778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:26.225 [2024-11-17 19:24:24.331686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:26.225 [2024-11-17 19:24:24.331774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.331791] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:26.225 [2024-11-17 19:24:24.331804] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:26.225 [2024-11-17 19:24:24.331813] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:26.225 [2024-11-17 19:24:24.331824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:26.225 [2024-11-17 19:24:24.339686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
00:15:26.226 [2024-11-17 19:24:24.339713] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:26.226 [2024-11-17 19:24:24.339747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.339762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.339775] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.226 [2024-11-17 19:24:24.339784] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.226 [2024-11-17 19:24:24.339794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.347686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.347739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.347757] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.347770] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.226 [2024-11-17 19:24:24.347779] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.226 [2024-11-17 19:24:24.347789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.355691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.355713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.355726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.355754] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.355765] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.355774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:26.226 [2024-11-17 19:24:24.355782] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:26.226 [2024-11-17 19:24:24.355790] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:26.226 [2024-11-17 
19:24:24.355798] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:26.226 [2024-11-17 19:24:24.355822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.363686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.363713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.371688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.371734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.379687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.379712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.387683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.387736] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:26.226 [2024-11-17 19:24:24.387747] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:26.226 [2024-11-17 19:24:24.387754] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:26.226 [2024-11-17 19:24:24.387760] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:26.226 [2024-11-17 19:24:24.387770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:26.226 [2024-11-17 19:24:24.387781] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:26.226 [2024-11-17 19:24:24.387790] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:26.226 [2024-11-17 19:24:24.387799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.387809] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:26.226 [2024-11-17 19:24:24.387817] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.226 [2024-11-17 19:24:24.387826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.387837] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:26.226 [2024-11-17 19:24:24.387845] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:26.226 [2024-11-17 19:24:24.387859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:26.226 [2024-11-17 19:24:24.395686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.395716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.395732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:26.226 [2024-11-17 19:24:24.395745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:26.226 ===================================================== 00:15:26.226 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.226 ===================================================== 00:15:26.226 Controller Capabilities/Features 00:15:26.226 ================================ 00:15:26.226 Vendor ID: 4e58 00:15:26.226 Subsystem Vendor ID: 4e58 00:15:26.226 Serial Number: SPDK2 00:15:26.226 Model Number: SPDK bdev Controller 00:15:26.226 Firmware Version: 24.01.1 00:15:26.226 Recommended Arb Burst: 6 00:15:26.226 IEEE OUI Identifier: 8d 6b 50 00:15:26.226 Multi-path I/O 00:15:26.226 May have multiple subsystem ports: Yes 00:15:26.226 May have multiple controllers: Yes 00:15:26.226 Associated with SR-IOV VF: No 00:15:26.226 Max Data Transfer Size: 131072 00:15:26.226 Max Number of Namespaces: 32 00:15:26.226 Max Number of I/O Queues: 127 00:15:26.226 NVMe Specification Version (VS): 1.3 00:15:26.226 NVMe Specification Version (Identify): 1.3 00:15:26.226 Maximum Queue Entries: 256 00:15:26.226 Contiguous Queues Required: Yes 00:15:26.226 Arbitration Mechanisms Supported 00:15:26.226 Weighted Round Robin: Not Supported 00:15:26.226 Vendor Specific: Not Supported 00:15:26.226 Reset Timeout: 15000 ms 00:15:26.226 Doorbell Stride: 4 bytes 00:15:26.226 NVM Subsystem Reset: Not Supported 00:15:26.226 Command Sets Supported 00:15:26.226 NVM Command Set: Supported 00:15:26.226 Boot Partition: Not Supported 00:15:26.226 Memory Page Size Minimum: 4096 bytes 00:15:26.226 Memory Page Size Maximum: 4096 bytes 00:15:26.226 Persistent Memory Region: Not Supported 00:15:26.226 Optional Asynchronous Events Supported 00:15:26.226 Namespace Attribute Notices: Supported 00:15:26.226 Firmware Activation Notices: Not Supported 00:15:26.226 ANA Change Notices: Not Supported 00:15:26.226 PLE Aggregate Log Change Notices: Not Supported 00:15:26.226 LBA Status Info Alert Notices: Not Supported 00:15:26.226 EGE Aggregate Log Change Notices: Not Supported 00:15:26.226 Normal NVM Subsystem Shutdown event: Not Supported 00:15:26.226 Zone Descriptor Change Notices: Not Supported 00:15:26.226 Discovery Log Change Notices: Not Supported 00:15:26.226 Controller Attributes 00:15:26.226 128-bit Host Identifier: Supported 00:15:26.226 Non-Operational Permissive Mode: Not Supported 00:15:26.226 NVM Sets: Not Supported 00:15:26.226 Read Recovery Levels: Not Supported 00:15:26.226 Endurance Groups: Not Supported 00:15:26.226 Predictable Latency Mode: Not Supported 00:15:26.226 Traffic Based Keep ALive: Not Supported 00:15:26.226 Namespace Granularity: Not Supported 00:15:26.226 SQ Associations: Not Supported 00:15:26.226 UUID List: Not Supported 00:15:26.226 Multi-Domain Subsystem: Not Supported 00:15:26.226 Fixed Capacity Management: Not Supported 
00:15:26.226 Variable Capacity Management: Not Supported 00:15:26.226 Delete Endurance Group: Not Supported 00:15:26.226 Delete NVM Set: Not Supported 00:15:26.226 Extended LBA Formats Supported: Not Supported 00:15:26.226 Flexible Data Placement Supported: Not Supported 00:15:26.226 00:15:26.226 Controller Memory Buffer Support 00:15:26.226 ================================ 00:15:26.226 Supported: No 00:15:26.226 00:15:26.226 Persistent Memory Region Support 00:15:26.226 ================================ 00:15:26.226 Supported: No 00:15:26.226 00:15:26.226 Admin Command Set Attributes 00:15:26.226 ============================ 00:15:26.226 Security Send/Receive: Not Supported 00:15:26.226 Format NVM: Not Supported 00:15:26.226 Firmware Activate/Download: Not Supported 00:15:26.226 Namespace Management: Not Supported 00:15:26.226 Device Self-Test: Not Supported 00:15:26.226 Directives: Not Supported 00:15:26.227 NVMe-MI: Not Supported 00:15:26.227 Virtualization Management: Not Supported 00:15:26.227 Doorbell Buffer Config: Not Supported 00:15:26.227 Get LBA Status Capability: Not Supported 00:15:26.227 Command & Feature Lockdown Capability: Not Supported 00:15:26.227 Abort Command Limit: 4 00:15:26.227 Async Event Request Limit: 4 00:15:26.227 Number of Firmware Slots: N/A 00:15:26.227 Firmware Slot 1 Read-Only: N/A 00:15:26.227 Firmware Activation Without Reset: N/A 00:15:26.227 Multiple Update Detection Support: N/A 00:15:26.227 Firmware Update Granularity: No Information Provided 00:15:26.227 Per-Namespace SMART Log: No 00:15:26.227 Asymmetric Namespace Access Log Page: Not Supported 00:15:26.227 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:26.227 Command Effects Log Page: Supported 00:15:26.227 Get Log Page Extended Data: Supported 00:15:26.227 Telemetry Log Pages: Not Supported 00:15:26.227 Persistent Event Log Pages: Not Supported 00:15:26.227 Supported Log Pages Log Page: May Support 00:15:26.227 Commands Supported & Effects Log Page: Not Supported 00:15:26.227 Feature Identifiers & Effects Log Page:May Support 00:15:26.227 NVMe-MI Commands & Effects Log Page: May Support 00:15:26.227 Data Area 4 for Telemetry Log: Not Supported 00:15:26.227 Error Log Page Entries Supported: 128 00:15:26.227 Keep Alive: Supported 00:15:26.227 Keep Alive Granularity: 10000 ms 00:15:26.227 00:15:26.227 NVM Command Set Attributes 00:15:26.227 ========================== 00:15:26.227 Submission Queue Entry Size 00:15:26.227 Max: 64 00:15:26.227 Min: 64 00:15:26.227 Completion Queue Entry Size 00:15:26.227 Max: 16 00:15:26.227 Min: 16 00:15:26.227 Number of Namespaces: 32 00:15:26.227 Compare Command: Supported 00:15:26.227 Write Uncorrectable Command: Not Supported 00:15:26.227 Dataset Management Command: Supported 00:15:26.227 Write Zeroes Command: Supported 00:15:26.227 Set Features Save Field: Not Supported 00:15:26.227 Reservations: Not Supported 00:15:26.227 Timestamp: Not Supported 00:15:26.227 Copy: Supported 00:15:26.227 Volatile Write Cache: Present 00:15:26.227 Atomic Write Unit (Normal): 1 00:15:26.227 Atomic Write Unit (PFail): 1 00:15:26.227 Atomic Compare & Write Unit: 1 00:15:26.227 Fused Compare & Write: Supported 00:15:26.227 Scatter-Gather List 00:15:26.227 SGL Command Set: Supported (Dword aligned) 00:15:26.227 SGL Keyed: Not Supported 00:15:26.227 SGL Bit Bucket Descriptor: Not Supported 00:15:26.227 SGL Metadata Pointer: Not Supported 00:15:26.227 Oversized SGL: Not Supported 00:15:26.227 SGL Metadata Address: Not Supported 00:15:26.227 SGL Offset: Not Supported 00:15:26.227 
Transport SGL Data Block: Not Supported 00:15:26.227 Replay Protected Memory Block: Not Supported 00:15:26.227 00:15:26.227 Firmware Slot Information 00:15:26.227 ========================= 00:15:26.227 Active slot: 1 00:15:26.227 Slot 1 Firmware Revision: 24.01.1 00:15:26.227 00:15:26.227 00:15:26.227 Commands Supported and Effects 00:15:26.227 ============================== 00:15:26.227 Admin Commands 00:15:26.227 -------------- 00:15:26.227 Get Log Page (02h): Supported 00:15:26.227 Identify (06h): Supported 00:15:26.227 Abort (08h): Supported 00:15:26.227 Set Features (09h): Supported 00:15:26.227 Get Features (0Ah): Supported 00:15:26.227 Asynchronous Event Request (0Ch): Supported 00:15:26.227 Keep Alive (18h): Supported 00:15:26.227 I/O Commands 00:15:26.227 ------------ 00:15:26.227 Flush (00h): Supported LBA-Change 00:15:26.227 Write (01h): Supported LBA-Change 00:15:26.227 Read (02h): Supported 00:15:26.227 Compare (05h): Supported 00:15:26.227 Write Zeroes (08h): Supported LBA-Change 00:15:26.227 Dataset Management (09h): Supported LBA-Change 00:15:26.227 Copy (19h): Supported LBA-Change 00:15:26.227 Unknown (79h): Supported LBA-Change 00:15:26.227 Unknown (7Ah): Supported 00:15:26.227 00:15:26.227 Error Log 00:15:26.227 ========= 00:15:26.227 00:15:26.227 Arbitration 00:15:26.227 =========== 00:15:26.227 Arbitration Burst: 1 00:15:26.227 00:15:26.227 Power Management 00:15:26.227 ================ 00:15:26.227 Number of Power States: 1 00:15:26.227 Current Power State: Power State #0 00:15:26.227 Power State #0: 00:15:26.227 Max Power: 0.00 W 00:15:26.227 Non-Operational State: Operational 00:15:26.227 Entry Latency: Not Reported 00:15:26.227 Exit Latency: Not Reported 00:15:26.227 Relative Read Throughput: 0 00:15:26.227 Relative Read Latency: 0 00:15:26.227 Relative Write Throughput: 0 00:15:26.227 Relative Write Latency: 0 00:15:26.227 Idle Power: Not Reported 00:15:26.227 Active Power: Not Reported 00:15:26.227 Non-Operational Permissive Mode: Not Supported 00:15:26.227 00:15:26.227 Health Information 00:15:26.227 ================== 00:15:26.227 Critical Warnings: 00:15:26.227 Available Spare Space: OK 00:15:26.227 Temperature: OK 00:15:26.227 Device Reliability: OK 00:15:26.227 Read Only: No 00:15:26.227 Volatile Memory Backup: OK 00:15:26.227 Current Temperature: 0 Kelvin[2024-11-17 19:24:24.395867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:26.227 [2024-11-17 19:24:24.403684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:26.227 [2024-11-17 19:24:24.403754] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:26.227 [2024-11-17 19:24:24.403773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.227 [2024-11-17 19:24:24.403784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.227 [2024-11-17 19:24:24.403794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.227 [2024-11-17 19:24:24.403804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.227 [2024-11-17 19:24:24.403883] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:26.227 [2024-11-17 19:24:24.403905] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:26.227 [2024-11-17 19:24:24.404931] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:26.227 [2024-11-17 19:24:24.404946] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:26.227 [2024-11-17 19:24:24.405896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:26.227 [2024-11-17 19:24:24.405920] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:26.227 [2024-11-17 19:24:24.405984] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:26.227 [2024-11-17 19:24:24.407158] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.227 (-273 Celsius) 00:15:26.227 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:26.227 Available Spare: 0% 00:15:26.227 Available Spare Threshold: 0% 00:15:26.227 Life Percentage Used: 0% 00:15:26.227 Data Units Read: 0 00:15:26.227 Data Units Written: 0 00:15:26.227 Host Read Commands: 0 00:15:26.227 Host Write Commands: 0 00:15:26.227 Controller Busy Time: 0 minutes 00:15:26.227 Power Cycles: 0 00:15:26.227 Power On Hours: 0 hours 00:15:26.227 Unsafe Shutdowns: 0 00:15:26.227 Unrecoverable Media Errors: 0 00:15:26.227 Lifetime Error Log Entries: 0 00:15:26.227 Warning Temperature Time: 0 minutes 00:15:26.227 Critical Temperature Time: 0 minutes 00:15:26.227 00:15:26.227 Number of Queues 00:15:26.227 ================ 00:15:26.227 Number of I/O Submission Queues: 127 00:15:26.227 Number of I/O Completion Queues: 127 00:15:26.227 00:15:26.227 Active Namespaces 00:15:26.227 ================= 00:15:26.227 Namespace ID:1 00:15:26.227 Error Recovery Timeout: Unlimited 00:15:26.227 Command Set Identifier: NVM (00h) 00:15:26.227 Deallocate: Supported 00:15:26.227 Deallocated/Unwritten Error: Not Supported 00:15:26.227 Deallocated Read Value: Unknown 00:15:26.227 Deallocate in Write Zeroes: Not Supported 00:15:26.227 Deallocated Guard Field: 0xFFFF 00:15:26.227 Flush: Supported 00:15:26.227 Reservation: Supported 00:15:26.227 Namespace Sharing Capabilities: Multiple Controllers 00:15:26.227 Size (in LBAs): 131072 (0GiB) 00:15:26.227 Capacity (in LBAs): 131072 (0GiB) 00:15:26.227 Utilization (in LBAs): 131072 (0GiB) 00:15:26.227 NGUID: 7EAAB875929A44B7AD55E644D08D1269 00:15:26.227 UUID: 7eaab875-929a-44b7-ad55-e644d08d1269 00:15:26.227 Thin Provisioning: Not Supported 00:15:26.227 Per-NS Atomic Units: Yes 00:15:26.227 Atomic Boundary Size (Normal): 0 00:15:26.227 Atomic Boundary Size (PFail): 0 00:15:26.227 Atomic Boundary Offset: 0 00:15:26.227 Maximum Single Source Range Length: 65535 00:15:26.228 Maximum Copy Length: 65535 00:15:26.228 Maximum Source Range Count: 1 00:15:26.228 NGUID/EUI64 Never Reused: No 00:15:26.228 Namespace Write Protected: No 00:15:26.228 Number of LBA Formats: 1 00:15:26.228 Current LBA Format: LBA Format #00 00:15:26.228 LBA Format #00: Data Size: 512 Metadata Size: 0 
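The identify dump above should agree with the subsystem configuration reported earlier by nvmf_get_subsystems for cnode2 (serial number SPDK2, model "SPDK bdev Controller", and the Malloc2 NGUID/UUID). A hypothetical spot-check, reusing the invocation from the log and assuming it is run from the root of the SPDK build tree:

    build/bin/spdk_nvme_identify -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        | grep -E 'Serial Number|Model Number|Subsystem NQN|NGUID|UUID'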
00:15:26.228 00:15:26.228 19:24:24 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:26.228 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.794 Initializing NVMe Controllers 00:15:32.794 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:32.794 Initialization complete. Launching workers. 00:15:32.794 ======================================================== 00:15:32.794 Latency(us) 00:15:32.794 Device Information : IOPS MiB/s Average min max 00:15:32.794 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38477.40 150.30 3326.56 1132.96 9462.74 00:15:32.794 ======================================================== 00:15:32.794 Total : 38477.40 150.30 3326.56 1132.96 9462.74 00:15:32.794 00:15:32.794 19:24:29 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:32.794 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.983 Initializing NVMe Controllers 00:15:36.983 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:36.983 Initialization complete. Launching workers. 00:15:36.983 ======================================================== 00:15:36.983 Latency(us) 00:15:36.983 Device Information : IOPS MiB/s Average min max 00:15:36.983 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36968.33 144.41 3462.31 1145.58 7281.63 00:15:36.983 ======================================================== 00:15:36.983 Total : 36968.33 144.41 3462.31 1145.58 7281.63 00:15:36.983 00:15:36.983 19:24:35 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:36.983 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.277 Initializing NVMe Controllers 00:15:42.277 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.277 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.277 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:42.277 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:42.277 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:42.277 Initialization complete. Launching workers. 
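In the two spdk_nvme_perf runs above, the MiB/s column is consistent with IOPS multiplied by the 4096-byte I/O size (-o 4096). A quick sanity check of both tables, assuming bc is available:

    echo '38477.40 * 4096 / 1048576' | bc -l   # ~150.30 MiB/s, matching the read run
    echo '36968.33 * 4096 / 1048576' | bc -l   # ~144.41 MiB/s, matching the write run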
00:15:42.277 Starting thread on core 2 00:15:42.277 Starting thread on core 3 00:15:42.277 Starting thread on core 1 00:15:42.277 19:24:40 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:42.277 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.603 Initializing NVMe Controllers 00:15:45.603 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.603 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.603 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:45.603 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:45.603 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:45.603 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:45.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:45.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:45.603 Initialization complete. Launching workers. 00:15:45.603 Starting thread on core 1 with urgent priority queue 00:15:45.603 Starting thread on core 2 with urgent priority queue 00:15:45.603 Starting thread on core 3 with urgent priority queue 00:15:45.603 Starting thread on core 0 with urgent priority queue 00:15:45.603 SPDK bdev Controller (SPDK2 ) core 0: 5965.00 IO/s 16.76 secs/100000 ios 00:15:45.603 SPDK bdev Controller (SPDK2 ) core 1: 6445.00 IO/s 15.52 secs/100000 ios 00:15:45.603 SPDK bdev Controller (SPDK2 ) core 2: 6267.33 IO/s 15.96 secs/100000 ios 00:15:45.603 SPDK bdev Controller (SPDK2 ) core 3: 6430.33 IO/s 15.55 secs/100000 ios 00:15:45.603 ======================================================== 00:15:45.603 00:15:45.603 19:24:43 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:45.603 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.861 Initializing NVMe Controllers 00:15:45.861 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.861 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.861 Namespace ID: 1 size: 0GB 00:15:45.861 Initialization complete. 00:15:45.861 INFO: using host memory buffer for IO 00:15:45.861 Hello world! 00:15:45.861 19:24:44 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:45.861 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.235 Initializing NVMe Controllers 00:15:47.235 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:47.235 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:47.235 Initialization complete. Launching workers. 
00:15:47.235 submit (in ns) avg, min, max = 7098.4, 3513.3, 4015418.9 00:15:47.235 complete (in ns) avg, min, max = 22546.9, 2033.3, 4016074.4 00:15:47.235 00:15:47.235 Submit histogram 00:15:47.235 ================ 00:15:47.235 Range in us Cumulative Count 00:15:47.235 3.508 - 3.532: 0.2588% ( 36) 00:15:47.235 3.532 - 3.556: 1.0352% ( 108) 00:15:47.235 3.556 - 3.579: 3.2495% ( 308) 00:15:47.235 3.579 - 3.603: 7.8648% ( 642) 00:15:47.235 3.603 - 3.627: 15.4277% ( 1052) 00:15:47.235 3.627 - 3.650: 26.0388% ( 1476) 00:15:47.235 3.650 - 3.674: 36.4989% ( 1455) 00:15:47.235 3.674 - 3.698: 45.2480% ( 1217) 00:15:47.235 3.698 - 3.721: 52.9691% ( 1074) 00:15:47.235 3.721 - 3.745: 58.6413% ( 789) 00:15:47.235 3.745 - 3.769: 63.2207% ( 637) 00:15:47.235 3.769 - 3.793: 67.5845% ( 607) 00:15:47.235 3.793 - 3.816: 70.8411% ( 453) 00:15:47.235 3.816 - 3.840: 73.9540% ( 433) 00:15:47.235 3.840 - 3.864: 77.2035% ( 452) 00:15:47.235 3.864 - 3.887: 80.6973% ( 486) 00:15:47.235 3.887 - 3.911: 84.1553% ( 481) 00:15:47.235 3.911 - 3.935: 86.9087% ( 383) 00:15:47.235 3.935 - 3.959: 88.8066% ( 264) 00:15:47.235 3.959 - 3.982: 90.3882% ( 220) 00:15:47.235 3.982 - 4.006: 91.8476% ( 203) 00:15:47.235 4.006 - 4.030: 93.0554% ( 168) 00:15:47.235 4.030 - 4.053: 94.0187% ( 134) 00:15:47.235 4.053 - 4.077: 94.6441% ( 87) 00:15:47.235 4.077 - 4.101: 95.1977% ( 77) 00:15:47.235 4.101 - 4.124: 95.5715% ( 52) 00:15:47.235 4.124 - 4.148: 95.8088% ( 33) 00:15:47.235 4.148 - 4.172: 96.0244% ( 30) 00:15:47.235 4.172 - 4.196: 96.1538% ( 18) 00:15:47.235 4.196 - 4.219: 96.2545% ( 14) 00:15:47.235 4.219 - 4.243: 96.3048% ( 7) 00:15:47.235 4.243 - 4.267: 96.3983% ( 13) 00:15:47.235 4.267 - 4.290: 96.4702% ( 10) 00:15:47.235 4.290 - 4.314: 96.5708% ( 14) 00:15:47.235 4.314 - 4.338: 96.6715% ( 14) 00:15:47.235 4.338 - 4.361: 96.7505% ( 11) 00:15:47.235 4.361 - 4.385: 96.8081% ( 8) 00:15:47.235 4.385 - 4.409: 96.8799% ( 10) 00:15:47.235 4.409 - 4.433: 96.9950% ( 16) 00:15:47.235 4.433 - 4.456: 97.0597% ( 9) 00:15:47.235 4.456 - 4.480: 97.1100% ( 7) 00:15:47.235 4.480 - 4.504: 97.1387% ( 4) 00:15:47.235 4.504 - 4.527: 97.1891% ( 7) 00:15:47.235 4.527 - 4.551: 97.2106% ( 3) 00:15:47.235 4.551 - 4.575: 97.2322% ( 3) 00:15:47.235 4.575 - 4.599: 97.2538% ( 3) 00:15:47.235 4.599 - 4.622: 97.2610% ( 1) 00:15:47.235 4.622 - 4.646: 97.2753% ( 2) 00:15:47.235 4.646 - 4.670: 97.2897% ( 2) 00:15:47.235 4.670 - 4.693: 97.2969% ( 1) 00:15:47.235 4.693 - 4.717: 97.3113% ( 2) 00:15:47.235 4.788 - 4.812: 97.3257% ( 2) 00:15:47.235 4.812 - 4.836: 97.3329% ( 1) 00:15:47.235 4.836 - 4.859: 97.3832% ( 7) 00:15:47.235 4.859 - 4.883: 97.4047% ( 3) 00:15:47.235 4.883 - 4.907: 97.4551% ( 7) 00:15:47.235 4.907 - 4.930: 97.5413% ( 12) 00:15:47.235 4.930 - 4.954: 97.5845% ( 6) 00:15:47.235 4.954 - 4.978: 97.6420% ( 8) 00:15:47.235 4.978 - 5.001: 97.7354% ( 13) 00:15:47.235 5.001 - 5.025: 97.7786% ( 6) 00:15:47.235 5.025 - 5.049: 97.8145% ( 5) 00:15:47.235 5.049 - 5.073: 97.8792% ( 9) 00:15:47.235 5.073 - 5.096: 97.9008% ( 3) 00:15:47.235 5.096 - 5.120: 97.9367% ( 5) 00:15:47.235 5.120 - 5.144: 97.9655% ( 4) 00:15:47.235 5.144 - 5.167: 98.0086% ( 6) 00:15:47.235 5.167 - 5.191: 98.0661% ( 8) 00:15:47.235 5.191 - 5.215: 98.0805% ( 2) 00:15:47.235 5.215 - 5.239: 98.1093% ( 4) 00:15:47.235 5.239 - 5.262: 98.1452% ( 5) 00:15:47.235 5.262 - 5.286: 98.1596% ( 2) 00:15:47.235 5.310 - 5.333: 98.1740% ( 2) 00:15:47.235 5.333 - 5.357: 98.2027% ( 4) 00:15:47.235 5.381 - 5.404: 98.2171% ( 2) 00:15:47.235 5.404 - 5.428: 98.2315% ( 2) 00:15:47.235 5.476 - 5.499: 98.2387% ( 1) 
00:15:47.236 5.499 - 5.523: 98.2459% ( 1) 00:15:47.236 5.523 - 5.547: 98.2746% ( 4) 00:15:47.236 5.570 - 5.594: 98.2890% ( 2) 00:15:47.236 5.594 - 5.618: 98.2962% ( 1) 00:15:47.236 5.689 - 5.713: 98.3034% ( 1) 00:15:47.236 5.736 - 5.760: 98.3178% ( 2) 00:15:47.236 5.855 - 5.879: 98.3321% ( 2) 00:15:47.236 5.997 - 6.021: 98.3465% ( 2) 00:15:47.236 6.044 - 6.068: 98.3537% ( 1) 00:15:47.236 6.116 - 6.163: 98.3609% ( 1) 00:15:47.236 6.210 - 6.258: 98.3681% ( 1) 00:15:47.236 6.258 - 6.305: 98.3753% ( 1) 00:15:47.236 6.305 - 6.353: 98.3896% ( 2) 00:15:47.236 6.400 - 6.447: 98.3968% ( 1) 00:15:47.236 6.495 - 6.542: 98.4040% ( 1) 00:15:47.236 6.542 - 6.590: 98.4112% ( 1) 00:15:47.236 6.969 - 7.016: 98.4184% ( 1) 00:15:47.236 7.064 - 7.111: 98.4256% ( 1) 00:15:47.236 7.206 - 7.253: 98.4328% ( 1) 00:15:47.236 7.253 - 7.301: 98.4400% ( 1) 00:15:47.236 7.301 - 7.348: 98.4472% ( 1) 00:15:47.236 7.443 - 7.490: 98.4543% ( 1) 00:15:47.236 7.538 - 7.585: 98.4687% ( 2) 00:15:47.236 7.585 - 7.633: 98.4903% ( 3) 00:15:47.236 7.633 - 7.680: 98.4975% ( 1) 00:15:47.236 7.680 - 7.727: 98.5047% ( 1) 00:15:47.236 7.775 - 7.822: 98.5262% ( 3) 00:15:47.236 7.870 - 7.917: 98.5334% ( 1) 00:15:47.236 7.917 - 7.964: 98.5478% ( 2) 00:15:47.236 7.964 - 8.012: 98.5550% ( 1) 00:15:47.236 8.059 - 8.107: 98.5622% ( 1) 00:15:47.236 8.154 - 8.201: 98.5766% ( 2) 00:15:47.236 8.201 - 8.249: 98.5838% ( 1) 00:15:47.236 8.391 - 8.439: 98.5981% ( 2) 00:15:47.236 8.486 - 8.533: 98.6125% ( 2) 00:15:47.236 8.628 - 8.676: 98.6197% ( 1) 00:15:47.236 8.723 - 8.770: 98.6269% ( 1) 00:15:47.236 8.770 - 8.818: 98.6341% ( 1) 00:15:47.236 8.865 - 8.913: 98.6485% ( 2) 00:15:47.236 8.913 - 8.960: 98.6556% ( 1) 00:15:47.236 8.960 - 9.007: 98.6772% ( 3) 00:15:47.236 9.055 - 9.102: 98.6844% ( 1) 00:15:47.236 9.197 - 9.244: 98.6916% ( 1) 00:15:47.236 9.339 - 9.387: 98.6988% ( 1) 00:15:47.236 9.387 - 9.434: 98.7060% ( 1) 00:15:47.236 9.529 - 9.576: 98.7132% ( 1) 00:15:47.236 9.671 - 9.719: 98.7275% ( 2) 00:15:47.236 9.861 - 9.908: 98.7347% ( 1) 00:15:47.236 9.908 - 9.956: 98.7491% ( 2) 00:15:47.236 9.956 - 10.003: 98.7635% ( 2) 00:15:47.236 10.003 - 10.050: 98.7707% ( 1) 00:15:47.236 10.098 - 10.145: 98.7779% ( 1) 00:15:47.236 10.145 - 10.193: 98.7850% ( 1) 00:15:47.236 10.193 - 10.240: 98.7922% ( 1) 00:15:47.236 10.240 - 10.287: 98.7994% ( 1) 00:15:47.236 10.287 - 10.335: 98.8138% ( 2) 00:15:47.236 10.382 - 10.430: 98.8210% ( 1) 00:15:47.236 10.524 - 10.572: 98.8282% ( 1) 00:15:47.236 10.619 - 10.667: 98.8354% ( 1) 00:15:47.236 10.714 - 10.761: 98.8497% ( 2) 00:15:47.236 10.809 - 10.856: 98.8641% ( 2) 00:15:47.236 10.904 - 10.951: 98.8713% ( 1) 00:15:47.236 10.999 - 11.046: 98.8785% ( 1) 00:15:47.236 11.093 - 11.141: 98.8857% ( 1) 00:15:47.236 11.141 - 11.188: 98.8929% ( 1) 00:15:47.236 11.236 - 11.283: 98.9001% ( 1) 00:15:47.236 11.283 - 11.330: 98.9073% ( 1) 00:15:47.236 11.330 - 11.378: 98.9145% ( 1) 00:15:47.236 11.378 - 11.425: 98.9216% ( 1) 00:15:47.236 11.425 - 11.473: 98.9360% ( 2) 00:15:47.236 11.520 - 11.567: 98.9432% ( 1) 00:15:47.236 11.710 - 11.757: 98.9504% ( 1) 00:15:47.236 11.757 - 11.804: 98.9576% ( 1) 00:15:47.236 11.947 - 11.994: 98.9648% ( 1) 00:15:47.236 12.326 - 12.421: 98.9720% ( 1) 00:15:47.236 12.800 - 12.895: 98.9792% ( 1) 00:15:47.236 13.274 - 13.369: 98.9863% ( 1) 00:15:47.236 13.748 - 13.843: 98.9935% ( 1) 00:15:47.236 14.033 - 14.127: 99.0079% ( 2) 00:15:47.236 14.127 - 14.222: 99.0151% ( 1) 00:15:47.236 14.412 - 14.507: 99.0223% ( 1) 00:15:47.236 14.886 - 14.981: 99.0295% ( 1) 00:15:47.236 14.981 - 15.076: 99.0367% ( 1) 
00:15:47.236 15.644 - 15.739: 99.0439% ( 1) 00:15:47.236 16.593 - 16.687: 99.0510% ( 1) 00:15:47.236 17.067 - 17.161: 99.0654% ( 2) 00:15:47.236 17.161 - 17.256: 99.1014% ( 5) 00:15:47.236 17.256 - 17.351: 99.1086% ( 1) 00:15:47.236 17.351 - 17.446: 99.1373% ( 4) 00:15:47.236 17.446 - 17.541: 99.1733% ( 5) 00:15:47.236 17.541 - 17.636: 99.1948% ( 3) 00:15:47.236 17.636 - 17.730: 99.2523% ( 8) 00:15:47.236 17.730 - 17.825: 99.3386% ( 12) 00:15:47.236 17.825 - 17.920: 99.3961% ( 8) 00:15:47.236 17.920 - 18.015: 99.4608% ( 9) 00:15:47.236 18.015 - 18.110: 99.4896% ( 4) 00:15:47.236 18.110 - 18.204: 99.5543% ( 9) 00:15:47.236 18.204 - 18.299: 99.5974% ( 6) 00:15:47.236 18.299 - 18.394: 99.6693% ( 10) 00:15:47.236 18.394 - 18.489: 99.6981% ( 4) 00:15:47.236 18.489 - 18.584: 99.7484% ( 7) 00:15:47.236 18.584 - 18.679: 99.7699% ( 3) 00:15:47.236 18.679 - 18.773: 99.8059% ( 5) 00:15:47.236 18.773 - 18.868: 99.8131% ( 1) 00:15:47.236 18.868 - 18.963: 99.8203% ( 1) 00:15:47.236 19.058 - 19.153: 99.8347% ( 2) 00:15:47.236 19.153 - 19.247: 99.8418% ( 1) 00:15:47.236 19.532 - 19.627: 99.8490% ( 1) 00:15:47.236 20.101 - 20.196: 99.8562% ( 1) 00:15:47.236 23.419 - 23.514: 99.8634% ( 1) 00:15:47.236 23.514 - 23.609: 99.8706% ( 1) 00:15:47.236 23.609 - 23.704: 99.8778% ( 1) 00:15:47.236 24.083 - 24.178: 99.8850% ( 1) 00:15:47.236 24.273 - 24.462: 99.8922% ( 1) 00:15:47.236 25.221 - 25.410: 99.8994% ( 1) 00:15:47.236 25.410 - 25.600: 99.9065% ( 1) 00:15:47.236 28.824 - 29.013: 99.9137% ( 1) 00:15:47.236 35.271 - 35.461: 99.9209% ( 1) 00:15:47.236 3980.705 - 4004.978: 99.9928% ( 10) 00:15:47.236 4004.978 - 4029.250: 100.0000% ( 1) 00:15:47.236 00:15:47.236 Complete histogram 00:15:47.236 ================== 00:15:47.236 Range in us Cumulative Count 00:15:47.236 2.027 - 2.039: 0.2013% ( 28) 00:15:47.236 2.039 - 2.050: 11.3156% ( 1546) 00:15:47.236 2.050 - 2.062: 17.4694% ( 856) 00:15:47.236 2.062 - 2.074: 22.2789% ( 669) 00:15:47.236 2.074 - 2.086: 54.3063% ( 4455) 00:15:47.236 2.086 - 2.098: 63.5370% ( 1284) 00:15:47.236 2.098 - 2.110: 66.1682% ( 366) 00:15:47.236 2.110 - 2.121: 70.8124% ( 646) 00:15:47.236 2.121 - 2.133: 71.5097% ( 97) 00:15:47.236 2.133 - 2.145: 77.2466% ( 798) 00:15:47.236 2.145 - 2.157: 88.0302% ( 1500) 00:15:47.236 2.157 - 2.169: 90.3523% ( 323) 00:15:47.236 2.169 - 2.181: 91.5672% ( 169) 00:15:47.236 2.181 - 2.193: 92.9619% ( 194) 00:15:47.236 2.193 - 2.204: 93.2495% ( 40) 00:15:47.236 2.204 - 2.216: 94.3063% ( 147) 00:15:47.236 2.216 - 2.228: 95.3918% ( 151) 00:15:47.236 2.228 - 2.240: 95.5068% ( 16) 00:15:47.236 2.240 - 2.252: 95.6075% ( 14) 00:15:47.236 2.252 - 2.264: 95.7081% ( 14) 00:15:47.236 2.264 - 2.276: 95.7584% ( 7) 00:15:47.236 2.276 - 2.287: 95.9022% ( 20) 00:15:47.237 2.287 - 2.299: 95.9957% ( 13) 00:15:47.237 2.299 - 2.311: 96.0604% ( 9) 00:15:47.237 2.311 - 2.323: 96.1107% ( 7) 00:15:47.237 2.323 - 2.335: 96.2473% ( 19) 00:15:47.237 2.335 - 2.347: 96.3336% ( 12) 00:15:47.237 2.347 - 2.359: 96.5277% ( 27) 00:15:47.237 2.359 - 2.370: 96.8152% ( 40) 00:15:47.237 2.370 - 2.382: 96.9950% ( 25) 00:15:47.237 2.382 - 2.394: 97.1891% ( 27) 00:15:47.237 2.394 - 2.406: 97.3329% ( 20) 00:15:47.237 2.406 - 2.418: 97.4623% ( 18) 00:15:47.237 2.418 - 2.430: 97.5485% ( 12) 00:15:47.237 2.430 - 2.441: 97.6204% ( 10) 00:15:47.237 2.441 - 2.453: 97.6995% ( 11) 00:15:47.237 2.453 - 2.465: 97.7283% ( 4) 00:15:47.237 2.465 - 2.477: 97.7426% ( 2) 00:15:47.237 2.477 - 2.489: 97.7498% ( 1) 00:15:47.237 2.489 - 2.501: 97.7786% ( 4) 00:15:47.237 2.501 - 2.513: 97.8361% ( 8) 00:15:47.237 2.513 - 
2.524: 97.9367% ( 14) 00:15:47.237 2.524 - 2.536: 98.0230% ( 12) 00:15:47.237 2.536 - 2.548: 98.1021% ( 11) 00:15:47.237 2.548 - 2.560: 98.1812% ( 11) 00:15:47.237 2.560 - 2.572: 98.2099% ( 4) 00:15:47.237 2.572 - 2.584: 98.2746% ( 9) 00:15:47.237 2.584 - 2.596: 98.2818% ( 1) 00:15:47.237 2.596 - 2.607: 98.3034% ( 3) 00:15:47.237 2.607 - 2.619: 98.3106% ( 1) 00:15:47.237 2.619 - 2.631: 98.3249% ( 2) 00:15:47.237 2.631 - 2.643: 98.3321% ( 1) 00:15:47.237 2.643 - 2.655: 98.3465% ( 2) 00:15:47.237 2.655 - 2.667: 98.3537% ( 1) 00:15:47.237 2.679 - 2.690: 98.3609% ( 1) 00:15:47.237 2.690 - 2.702: 98.3681% ( 1) 00:15:47.237 2.714 - 2.726: 98.3753% ( 1) 00:15:47.237 2.773 - 2.785: 98.3825% ( 1) 00:15:47.237 2.785 - 2.797: 98.3896% ( 1) 00:15:47.237 2.797 - 2.809: 98.3968% ( 1) 00:15:47.237 2.809 - 2.821: 98.4040% ( 1) 00:15:47.237 2.868 - 2.880: 98.4112% ( 1) 00:15:47.237 2.892 - 2.904: 98.4184% ( 1) 00:15:47.237 2.939 - 2.951: 98.4256% ( 1) 00:15:47.237 2.951 - 2.963: 98.4472% ( 3) 00:15:47.237 2.963 - 2.975: 98.4543% ( 1) 00:15:47.237 3.022 - 3.034: 98.4759% ( 3) 00:15:47.237 3.034 - 3.058: 98.4903% ( 2) 00:15:47.237 3.058 - 3.081: 98.5047% ( 2) 00:15:47.237 3.081 - 3.105: 98.5119% ( 1) 00:15:47.237 3.105 - 3.129: 98.5262% ( 2) 00:15:47.237 3.129 - 3.153: 98.5334% ( 1) 00:15:47.237 3.153 - 3.176: 98.5478% ( 2) 00:15:47.237 3.176 - 3.200: 98.5622% ( 2) 00:15:47.237 3.271 - 3.295: 98.5766% ( 2) 00:15:47.237 3.319 - 3.342: 98.5981% ( 3) 00:15:47.237 3.342 - 3.366: 98.6125% ( 2) 00:15:47.237 3.461 - 3.484: 98.6197% ( 1) 00:15:47.237 3.484 - 3.508: 98.6341% ( 2) 00:15:47.237 3.508 - 3.532: 98.6485% ( 2) 00:15:47.237 3.532 - 3.556: 98.6556% ( 1) 00:15:47.237 3.556 - 3.579: 98.6700% ( 2) 00:15:47.237 3.579 - 3.603: 98.6916% ( 3) 00:15:47.237 3.603 - 3.627: 98.7132% ( 3) 00:15:47.237 3.627 - 3.650: 98.7275% ( 2) 00:15:47.237 3.650 - 3.674: 98.7563% ( 4) 00:15:47.237 3.674 - 3.698: 98.7707% ( 2) 00:15:47.237 3.698 - 3.721: 98.7779% ( 1) 00:15:47.237 3.721 - 3.745: 98.7850% ( 1) 00:15:47.237 3.769 - 3.793: 98.8066% ( 3) 00:15:47.237 3.793 - 3.816: 98.8282% ( 3) 00:15:47.237 3.816 - 3.840: 98.8426% ( 2) 00:15:47.237 3.864 - 3.887: 98.8497% ( 1) 00:15:47.237 3.887 - 3.911: 98.8569% ( 1) 00:15:47.237 4.006 - 4.030: 98.8713% ( 2) 00:15:47.237 4.172 - 4.196: 98.8785% ( 1) 00:15:47.237 5.404 - 5.428: 98.8857% ( 1) 00:15:47.237 5.547 - 5.570: 98.8929% ( 1) 00:15:47.237 5.855 - 5.879: 98.9001% ( 1) 00:15:47.237 5.950 - 5.973: 98.9073% ( 1) 00:15:47.237 6.210 - 6.258: 98.9145% ( 1) 00:15:47.237 6.684 - 6.732: 98.9216% ( 1) 00:15:47.237 6.827 - 6.874: 98.9288% ( 1) 00:15:47.237 7.016 - 7.064: 98.9360% ( 1) 00:15:47.237 7.064 - 7.111: 98.9432% ( 1) 00:15:47.237 7.301 - 7.348: 98.9504% ( 1) 00:15:47.237 7.490 - 7.538: 98.9576% ( 1) 00:15:47.237 8.012 - 8.059: 98.9648% ( 1) 00:15:47.237 8.486 - 8.533: 98.9720% ( 1) 00:15:47.237 9.150 - 9.197: 98.9792% ( 1) 00:15:47.237 13.748 - 13.843: 98.9863% ( 1) 00:15:47.237 14.222 - 14.317: 98.9935% ( 1) 00:15:47.237 15.170 - 15.265: 99.0007% ( 1) 00:15:47.237 15.360 - 15.455: 99.0079% ( 1) 00:15:47.237 15.455 - 15.550: 99.0223% ( 2) 00:15:47.237 15.644 - 15.739: 99.0582% ( 5) 00:15:47.237 15.739 - 15.834: 99.0654% ( 1) 00:15:47.237 15.834 - 15.929: 99.0870% ( 3) 00:15:47.237 15.929 - 16.024: 99.1229% ( 5) 00:15:47.237 16.024 - 16.119: 99.1445% ( 3) 00:15:47.237 16.119 - 16.213: 99.1804% ( 5) 00:15:47.237 16.213 - 16.308: 99.1948% ( 2) 00:15:47.237 16.308 - 16.403: 99.2308% ( 5) 00:15:47.237 16.403 - 16.498: 99.2523% ( 3) 00:15:47.237 16.498 - 16.593: 99.2667% ( 2) 00:15:47.237 
16.593 - 16.687: 99.3098% ( 6) 00:15:47.237 16.687 - 16.782: 99.3458% ( 5) 00:15:47.237 16.782 - 16.877: 99.3746% ( 4) 00:15:47.237 16.877 - 16.972: 99.3889% ( 2) 00:15:47.237 16.972 - 17.067: 99.4105% ( 3) 00:15:47.237 17.067 - 17.161: 99.4249% ( 2) 00:15:47.237 17.256 - 17.351: 99.4321% ( 1) 00:15:47.237 17.541 - 17.636: 99.4393% ( 1) 00:15:47.237 17.636 - 17.730: 99.4536% ( 2) 00:15:47.237 18.015 - 18.110: 99.4608% ( 1) 00:15:47.237 18.299 - 18.394: 99.4680% ( 1) 00:15:47.237 18.679 - 18.773: 99.4752% ( 1) 00:15:47.237 34.133 - 34.323: 99.4824% ( 1) 00:15:47.237 34.323 - 34.513: 99.4896% ( 1) 00:15:47.237 3155.437 - 3179.710: 99.4968% ( 1) 00:15:47.237 3980.705 - 4004.978: 99.9497% ( 63) 00:15:47.237 4004.978 - 4029.250: 100.0000% ( 7) 00:15:47.237 00:15:47.237 19:24:45 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:47.237 19:24:45 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:47.237 19:24:45 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:47.237 19:24:45 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:47.237 19:24:45 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:47.495 [ 00:15:47.495 { 00:15:47.495 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:47.495 "subtype": "Discovery", 00:15:47.495 "listen_addresses": [], 00:15:47.495 "allow_any_host": true, 00:15:47.495 "hosts": [] 00:15:47.495 }, 00:15:47.495 { 00:15:47.495 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:47.495 "subtype": "NVMe", 00:15:47.495 "listen_addresses": [ 00:15:47.495 { 00:15:47.495 "transport": "VFIOUSER", 00:15:47.495 "trtype": "VFIOUSER", 00:15:47.495 "adrfam": "IPv4", 00:15:47.495 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:47.495 "trsvcid": "0" 00:15:47.495 } 00:15:47.495 ], 00:15:47.495 "allow_any_host": true, 00:15:47.495 "hosts": [], 00:15:47.495 "serial_number": "SPDK1", 00:15:47.495 "model_number": "SPDK bdev Controller", 00:15:47.495 "max_namespaces": 32, 00:15:47.495 "min_cntlid": 1, 00:15:47.495 "max_cntlid": 65519, 00:15:47.495 "namespaces": [ 00:15:47.495 { 00:15:47.495 "nsid": 1, 00:15:47.495 "bdev_name": "Malloc1", 00:15:47.495 "name": "Malloc1", 00:15:47.495 "nguid": "95ED9030A37E45C7BCC9B0860B819E8A", 00:15:47.495 "uuid": "95ed9030-a37e-45c7-bcc9-b0860b819e8a" 00:15:47.495 }, 00:15:47.495 { 00:15:47.495 "nsid": 2, 00:15:47.495 "bdev_name": "Malloc3", 00:15:47.495 "name": "Malloc3", 00:15:47.495 "nguid": "304D0B6675974B86B81067A511DF64E7", 00:15:47.495 "uuid": "304d0b66-7597-4b86-b810-67a511df64e7" 00:15:47.495 } 00:15:47.495 ] 00:15:47.495 }, 00:15:47.495 { 00:15:47.495 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:47.495 "subtype": "NVMe", 00:15:47.495 "listen_addresses": [ 00:15:47.495 { 00:15:47.495 "transport": "VFIOUSER", 00:15:47.495 "trtype": "VFIOUSER", 00:15:47.495 "adrfam": "IPv4", 00:15:47.495 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:47.495 "trsvcid": "0" 00:15:47.495 } 00:15:47.495 ], 00:15:47.495 "allow_any_host": true, 00:15:47.495 "hosts": [], 00:15:47.495 "serial_number": "SPDK2", 00:15:47.495 "model_number": "SPDK bdev Controller", 00:15:47.495 "max_namespaces": 32, 00:15:47.495 "min_cntlid": 1, 00:15:47.495 "max_cntlid": 65519, 00:15:47.495 "namespaces": [ 00:15:47.495 { 00:15:47.495 "nsid": 1, 00:15:47.495 "bdev_name": "Malloc2", 00:15:47.495 "name": "Malloc2", 00:15:47.495 "nguid": 
"7EAAB875929A44B7AD55E644D08D1269", 00:15:47.495 "uuid": "7eaab875-929a-44b7-ad55-e644d08d1269" 00:15:47.495 } 00:15:47.495 ] 00:15:47.495 } 00:15:47.495 ] 00:15:47.495 19:24:45 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:47.495 19:24:45 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1178054 00:15:47.495 19:24:45 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:47.496 19:24:45 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:47.496 19:24:45 -- common/autotest_common.sh@1254 -- # local i=0 00:15:47.496 19:24:45 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:47.496 19:24:45 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:47.496 19:24:45 -- common/autotest_common.sh@1265 -- # return 0 00:15:47.496 19:24:45 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:47.496 19:24:45 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:47.753 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.011 Malloc4 00:15:48.011 19:24:46 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:48.268 19:24:46 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:48.268 Asynchronous Event Request test 00:15:48.268 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.268 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.268 Registering asynchronous event callbacks... 00:15:48.268 Starting namespace attribute notice tests for all controllers... 00:15:48.268 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:48.268 aer_cb - Changed Namespace 00:15:48.268 Cleaning up... 
00:15:48.527 [ 00:15:48.527 { 00:15:48.527 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:48.527 "subtype": "Discovery", 00:15:48.527 "listen_addresses": [], 00:15:48.527 "allow_any_host": true, 00:15:48.527 "hosts": [] 00:15:48.527 }, 00:15:48.527 { 00:15:48.527 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:48.527 "subtype": "NVMe", 00:15:48.527 "listen_addresses": [ 00:15:48.527 { 00:15:48.527 "transport": "VFIOUSER", 00:15:48.527 "trtype": "VFIOUSER", 00:15:48.527 "adrfam": "IPv4", 00:15:48.527 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:48.527 "trsvcid": "0" 00:15:48.527 } 00:15:48.527 ], 00:15:48.527 "allow_any_host": true, 00:15:48.527 "hosts": [], 00:15:48.527 "serial_number": "SPDK1", 00:15:48.527 "model_number": "SPDK bdev Controller", 00:15:48.527 "max_namespaces": 32, 00:15:48.527 "min_cntlid": 1, 00:15:48.527 "max_cntlid": 65519, 00:15:48.527 "namespaces": [ 00:15:48.527 { 00:15:48.527 "nsid": 1, 00:15:48.527 "bdev_name": "Malloc1", 00:15:48.527 "name": "Malloc1", 00:15:48.527 "nguid": "95ED9030A37E45C7BCC9B0860B819E8A", 00:15:48.527 "uuid": "95ed9030-a37e-45c7-bcc9-b0860b819e8a" 00:15:48.527 }, 00:15:48.527 { 00:15:48.527 "nsid": 2, 00:15:48.527 "bdev_name": "Malloc3", 00:15:48.527 "name": "Malloc3", 00:15:48.527 "nguid": "304D0B6675974B86B81067A511DF64E7", 00:15:48.527 "uuid": "304d0b66-7597-4b86-b810-67a511df64e7" 00:15:48.527 } 00:15:48.527 ] 00:15:48.527 }, 00:15:48.527 { 00:15:48.527 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:48.527 "subtype": "NVMe", 00:15:48.527 "listen_addresses": [ 00:15:48.527 { 00:15:48.527 "transport": "VFIOUSER", 00:15:48.527 "trtype": "VFIOUSER", 00:15:48.527 "adrfam": "IPv4", 00:15:48.527 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:48.527 "trsvcid": "0" 00:15:48.527 } 00:15:48.527 ], 00:15:48.527 "allow_any_host": true, 00:15:48.527 "hosts": [], 00:15:48.527 "serial_number": "SPDK2", 00:15:48.527 "model_number": "SPDK bdev Controller", 00:15:48.527 "max_namespaces": 32, 00:15:48.527 "min_cntlid": 1, 00:15:48.527 "max_cntlid": 65519, 00:15:48.527 "namespaces": [ 00:15:48.527 { 00:15:48.527 "nsid": 1, 00:15:48.527 "bdev_name": "Malloc2", 00:15:48.527 "name": "Malloc2", 00:15:48.527 "nguid": "7EAAB875929A44B7AD55E644D08D1269", 00:15:48.527 "uuid": "7eaab875-929a-44b7-ad55-e644d08d1269" 00:15:48.527 }, 00:15:48.527 { 00:15:48.527 "nsid": 2, 00:15:48.527 "bdev_name": "Malloc4", 00:15:48.527 "name": "Malloc4", 00:15:48.527 "nguid": "D9B5A1BF92674AB2888572EA743484F0", 00:15:48.527 "uuid": "d9b5a1bf-9267-4ab2-8885-72ea743484f0" 00:15:48.527 } 00:15:48.527 ] 00:15:48.527 } 00:15:48.527 ] 00:15:48.527 19:24:46 -- target/nvmf_vfio_user.sh@44 -- # wait 1178054 00:15:48.527 19:24:46 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:48.527 19:24:46 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1171658 00:15:48.527 19:24:46 -- common/autotest_common.sh@936 -- # '[' -z 1171658 ']' 00:15:48.527 19:24:46 -- common/autotest_common.sh@940 -- # kill -0 1171658 00:15:48.527 19:24:46 -- common/autotest_common.sh@941 -- # uname 00:15:48.527 19:24:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.527 19:24:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1171658 00:15:48.527 19:24:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.527 19:24:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.527 19:24:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1171658' 00:15:48.527 killing process with pid 1171658 00:15:48.527 
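The trace above hot-adds Malloc4 as namespace 2 of nqn.2019-07.io.spdk:cnode2 and confirms that the controller raises a namespace-attribute-changed AER before the listing shows the new namespace. A minimal bash recap of that sequence, condensed from the commands in the trace; $SPDK below is shorthand for the full jenkins workspace checkout, and the polling loop stands in for the suite's waitforfile helper, so treat this as an illustrative sketch rather than the exact script:

# Condensed recap of the namespace hot-add / AER check traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -f /tmp/aer_touch_file

# Start the AER listener against the second VFIO-user controller; it creates
# the touch file once its asynchronous event callback is registered.
$SPDK/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!

# Wait until the tool signals it is attached, then hot-add a namespace.
while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

# The listener should log "aer_cb - Changed Namespace"; confirm the new
# namespace is visible in the subsystem listing, then reap the tool.
$SPDK/scripts/rpc.py nvmf_get_subsystems
wait $aerpid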
19:24:46 -- common/autotest_common.sh@955 -- # kill 1171658 00:15:48.527 [2024-11-17 19:24:46.615971] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:48.527 19:24:46 -- common/autotest_common.sh@960 -- # wait 1171658 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1178221 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1178221' 00:15:48.786 Process pid: 1178221 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:48.786 19:24:46 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1178221 00:15:48.786 19:24:46 -- common/autotest_common.sh@829 -- # '[' -z 1178221 ']' 00:15:48.786 19:24:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.786 19:24:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.786 19:24:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.786 19:24:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.786 19:24:46 -- common/autotest_common.sh@10 -- # set +x 00:15:48.786 [2024-11-17 19:24:47.000883] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:48.786 [2024-11-17 19:24:47.001992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:48.786 [2024-11-17 19:24:47.002060] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.786 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.045 [2024-11-17 19:24:47.068699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.045 [2024-11-17 19:24:47.158069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:49.045 [2024-11-17 19:24:47.158244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.045 [2024-11-17 19:24:47.158272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.045 [2024-11-17 19:24:47.158289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
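At this point the poll-mode target is torn down and the whole setup is repeated with the target running in interrupt mode (setup_nvmf_vfio_user --interrupt-mode '-M -I'). A minimal sketch of that variant of the bring-up, condensed from the commands in this part of the trace and the transport creation that follows; $SPDK again stands in for the workspace path, and backgrounding with & replaces the suite's own app-start helper:

# Interrupt-mode variant of the target bring-up (condensed from the trace).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -rf /var/run/vfio-user

# Launch nvmf_tgt on cores 0-3 with interrupt mode enabled.
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!

# Once the RPC socket is listening, create the VFIO-user transport; -M -I are
# the extra transport arguments passed through by the test, as the following
# trace shows.
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I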
00:15:49.045 [2024-11-17 19:24:47.158405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.045 [2024-11-17 19:24:47.158465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.045 [2024-11-17 19:24:47.158559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.045 [2024-11-17 19:24:47.158561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.045 [2024-11-17 19:24:47.265970] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:15:49.045 [2024-11-17 19:24:47.266252] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:15:49.045 [2024-11-17 19:24:47.266550] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:15:49.045 [2024-11-17 19:24:47.267235] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:49.045 [2024-11-17 19:24:47.267350] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:15:49.979 19:24:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.979 19:24:47 -- common/autotest_common.sh@862 -- # return 0 00:15:49.979 19:24:47 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:50.915 19:24:48 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:51.172 19:24:49 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:51.172 19:24:49 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:51.172 19:24:49 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:51.172 19:24:49 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:51.172 19:24:49 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:51.429 Malloc1 00:15:51.429 19:24:49 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:51.688 19:24:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:51.946 19:24:49 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:52.204 19:24:50 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:52.204 19:24:50 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:52.204 19:24:50 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:52.462 Malloc2 00:15:52.462 19:24:50 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:52.720 19:24:50 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:52.978 19:24:51 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:53.236 19:24:51 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:53.236 19:24:51 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1178221 00:15:53.236 19:24:51 -- common/autotest_common.sh@936 -- # '[' -z 1178221 ']' 00:15:53.236 19:24:51 -- common/autotest_common.sh@940 -- # kill -0 1178221 00:15:53.236 19:24:51 -- common/autotest_common.sh@941 -- # uname 00:15:53.236 19:24:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.236 19:24:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1178221 00:15:53.236 19:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.236 19:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.236 19:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1178221' 00:15:53.236 killing process with pid 1178221 00:15:53.236 19:24:51 -- common/autotest_common.sh@955 -- # kill 1178221 00:15:53.236 19:24:51 -- common/autotest_common.sh@960 -- # wait 1178221 00:15:53.495 19:24:51 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:53.495 19:24:51 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:53.495 00:15:53.495 real 0m54.182s 00:15:53.495 user 3m34.437s 00:15:53.495 sys 0m4.502s 00:15:53.495 19:24:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:53.495 19:24:51 -- common/autotest_common.sh@10 -- # set +x 00:15:53.495 ************************************ 00:15:53.495 END TEST nvmf_vfio_user 00:15:53.495 ************************************ 00:15:53.495 19:24:51 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:53.495 19:24:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.495 19:24:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.495 19:24:51 -- common/autotest_common.sh@10 -- # set +x 00:15:53.495 ************************************ 00:15:53.495 START TEST nvmf_vfio_user_nvme_compliance 00:15:53.495 ************************************ 00:15:53.495 19:24:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:53.495 * Looking for test storage... 
00:15:53.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:53.495 19:24:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:53.495 19:24:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:53.495 19:24:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:53.756 19:24:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:53.756 19:24:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:53.756 19:24:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:53.756 19:24:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:53.756 19:24:51 -- scripts/common.sh@335 -- # IFS=.-: 00:15:53.756 19:24:51 -- scripts/common.sh@335 -- # read -ra ver1 00:15:53.756 19:24:51 -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.756 19:24:51 -- scripts/common.sh@336 -- # read -ra ver2 00:15:53.756 19:24:51 -- scripts/common.sh@337 -- # local 'op=<' 00:15:53.756 19:24:51 -- scripts/common.sh@339 -- # ver1_l=2 00:15:53.756 19:24:51 -- scripts/common.sh@340 -- # ver2_l=1 00:15:53.756 19:24:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:53.756 19:24:51 -- scripts/common.sh@343 -- # case "$op" in 00:15:53.756 19:24:51 -- scripts/common.sh@344 -- # : 1 00:15:53.756 19:24:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:53.756 19:24:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.756 19:24:51 -- scripts/common.sh@364 -- # decimal 1 00:15:53.756 19:24:51 -- scripts/common.sh@352 -- # local d=1 00:15:53.756 19:24:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.756 19:24:51 -- scripts/common.sh@354 -- # echo 1 00:15:53.756 19:24:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:53.756 19:24:51 -- scripts/common.sh@365 -- # decimal 2 00:15:53.756 19:24:51 -- scripts/common.sh@352 -- # local d=2 00:15:53.756 19:24:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.756 19:24:51 -- scripts/common.sh@354 -- # echo 2 00:15:53.756 19:24:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:53.756 19:24:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:53.756 19:24:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:53.756 19:24:51 -- scripts/common.sh@367 -- # return 0 00:15:53.756 19:24:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.756 19:24:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:53.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.756 --rc genhtml_branch_coverage=1 00:15:53.756 --rc genhtml_function_coverage=1 00:15:53.756 --rc genhtml_legend=1 00:15:53.756 --rc geninfo_all_blocks=1 00:15:53.756 --rc geninfo_unexecuted_blocks=1 00:15:53.756 00:15:53.756 ' 00:15:53.756 19:24:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:53.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.756 --rc genhtml_branch_coverage=1 00:15:53.756 --rc genhtml_function_coverage=1 00:15:53.756 --rc genhtml_legend=1 00:15:53.756 --rc geninfo_all_blocks=1 00:15:53.756 --rc geninfo_unexecuted_blocks=1 00:15:53.756 00:15:53.756 ' 00:15:53.756 19:24:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:53.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.756 --rc genhtml_branch_coverage=1 00:15:53.756 --rc genhtml_function_coverage=1 00:15:53.756 --rc genhtml_legend=1 00:15:53.756 --rc geninfo_all_blocks=1 00:15:53.756 --rc geninfo_unexecuted_blocks=1 00:15:53.756 
00:15:53.756 ' 00:15:53.756 19:24:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:53.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.756 --rc genhtml_branch_coverage=1 00:15:53.756 --rc genhtml_function_coverage=1 00:15:53.756 --rc genhtml_legend=1 00:15:53.756 --rc geninfo_all_blocks=1 00:15:53.756 --rc geninfo_unexecuted_blocks=1 00:15:53.756 00:15:53.756 ' 00:15:53.756 19:24:51 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.756 19:24:51 -- nvmf/common.sh@7 -- # uname -s 00:15:53.756 19:24:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.756 19:24:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.756 19:24:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.756 19:24:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.756 19:24:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.756 19:24:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.756 19:24:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.756 19:24:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.756 19:24:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.756 19:24:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.756 19:24:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.756 19:24:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.756 19:24:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.756 19:24:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.756 19:24:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.756 19:24:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.756 19:24:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.756 19:24:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.757 19:24:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.757 19:24:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.757 19:24:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.757 19:24:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.757 19:24:51 -- paths/export.sh@5 -- # export PATH 00:15:53.757 19:24:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.757 19:24:51 -- nvmf/common.sh@46 -- # : 0 00:15:53.757 19:24:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:53.757 19:24:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:53.757 19:24:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:53.757 19:24:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.757 19:24:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.757 19:24:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:53.757 19:24:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:53.757 19:24:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:53.757 19:24:51 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.757 19:24:51 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.757 19:24:51 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:53.757 19:24:51 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:53.757 19:24:51 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:53.757 19:24:51 -- compliance/compliance.sh@20 -- # nvmfpid=1178947 00:15:53.757 19:24:51 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:53.757 19:24:51 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1178947' 00:15:53.757 Process pid: 1178947 00:15:53.757 19:24:51 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.757 19:24:51 -- compliance/compliance.sh@24 -- # waitforlisten 1178947 00:15:53.757 19:24:51 -- common/autotest_common.sh@829 -- # '[' -z 1178947 ']' 00:15:53.757 19:24:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.757 19:24:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.757 19:24:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.757 19:24:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.757 19:24:51 -- common/autotest_common.sh@10 -- # set +x 00:15:53.757 [2024-11-17 19:24:51.829403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:53.757 [2024-11-17 19:24:51.829500] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.757 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.757 [2024-11-17 19:24:51.891671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.757 [2024-11-17 19:24:51.985111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.757 [2024-11-17 19:24:51.985256] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.757 [2024-11-17 19:24:51.985275] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.757 [2024-11-17 19:24:51.985287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.757 [2024-11-17 19:24:51.985343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.757 [2024-11-17 19:24:51.985406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.757 [2024-11-17 19:24:51.985409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.693 19:24:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.693 19:24:52 -- common/autotest_common.sh@862 -- # return 0 00:15:54.693 19:24:52 -- compliance/compliance.sh@26 -- # sleep 1 00:15:55.628 19:24:53 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:55.628 19:24:53 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:55.628 19:24:53 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:55.628 19:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.628 19:24:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.628 19:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.628 19:24:53 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:55.628 19:24:53 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:55.628 19:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.628 19:24:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.888 malloc0 00:15:55.888 19:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.888 19:24:53 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:55.888 19:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.888 19:24:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.888 19:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.888 19:24:53 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:55.888 19:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.888 19:24:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.888 19:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.888 19:24:53 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:55.888 19:24:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.888 19:24:53 -- common/autotest_common.sh@10 -- # set +x 00:15:55.888 19:24:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.888 19:24:53 -- compliance/compliance.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:55.888 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.888 00:15:55.888 00:15:55.888 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.888 http://cunit.sourceforge.net/ 00:15:55.888 00:15:55.888 00:15:55.888 Suite: nvme_compliance 00:15:55.888 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-17 19:24:54.104611] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:55.888 [2024-11-17 19:24:54.104693] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:55.888 [2024-11-17 19:24:54.104708] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:55.888 passed 00:15:56.147 Test: admin_identify_ctrlr_verify_fused ...passed 00:15:56.148 Test: admin_identify_ns ...[2024-11-17 19:24:54.344706] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:56.148 [2024-11-17 19:24:54.352701] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:56.148 passed 00:15:56.406 Test: admin_get_features_mandatory_features ...passed 00:15:56.406 Test: admin_get_features_optional_features ...passed 00:15:56.663 Test: admin_set_features_number_of_queues ...passed 00:15:56.663 Test: admin_get_log_page_mandatory_logs ...passed 00:15:56.923 Test: admin_get_log_page_with_lpo ...[2024-11-17 19:24:54.974704] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:56.923 passed 00:15:56.923 Test: fabric_property_get ...passed 00:15:56.923 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-17 19:24:55.161937] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:57.182 passed 00:15:57.182 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-17 19:24:55.335683] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:57.182 [2024-11-17 19:24:55.351692] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:57.182 passed 00:15:57.182 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-17 19:24:55.442923] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:57.442 passed 00:15:57.442 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-17 19:24:55.603686] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:57.442 [2024-11-17 19:24:55.627684] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:57.442 passed 00:15:57.700 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-17 19:24:55.718142] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:57.700 [2024-11-17 19:24:55.718183] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:57.700 passed 00:15:57.700 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-17 19:24:55.894688] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:57.700 [2024-11-17 19:24:55.902681] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:57.700 [2024-11-17 19:24:55.910680] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid 
cqid:0 00:15:57.700 [2024-11-17 19:24:55.918702] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:57.958 passed 00:15:57.958 Test: admin_create_io_sq_verify_pc ...[2024-11-17 19:24:56.047698] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:57.958 passed 00:15:59.332 Test: admin_create_io_qp_max_qps ...[2024-11-17 19:24:57.241696] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:59.590 passed 00:15:59.590 Test: admin_create_io_sq_shared_cq ...[2024-11-17 19:24:57.835686] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:59.850 passed 00:15:59.850 00:15:59.850 Run Summary: Type Total Ran Passed Failed Inactive 00:15:59.850 suites 1 1 n/a 0 0 00:15:59.850 tests 18 18 18 0 0 00:15:59.850 asserts 360 360 360 0 n/a 00:15:59.850 00:15:59.850 Elapsed time = 1.563 seconds 00:15:59.850 19:24:57 -- compliance/compliance.sh@42 -- # killprocess 1178947 00:15:59.850 19:24:57 -- common/autotest_common.sh@936 -- # '[' -z 1178947 ']' 00:15:59.850 19:24:57 -- common/autotest_common.sh@940 -- # kill -0 1178947 00:15:59.850 19:24:57 -- common/autotest_common.sh@941 -- # uname 00:15:59.850 19:24:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.850 19:24:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1178947 00:15:59.850 19:24:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.850 19:24:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.850 19:24:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1178947' 00:15:59.850 killing process with pid 1178947 00:15:59.850 19:24:57 -- common/autotest_common.sh@955 -- # kill 1178947 00:15:59.850 19:24:57 -- common/autotest_common.sh@960 -- # wait 1178947 00:16:00.110 19:24:58 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:00.110 19:24:58 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:00.110 00:16:00.110 real 0m6.563s 00:16:00.110 user 0m18.760s 00:16:00.110 sys 0m0.587s 00:16:00.110 19:24:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:00.110 19:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:00.110 ************************************ 00:16:00.110 END TEST nvmf_vfio_user_nvme_compliance 00:16:00.110 ************************************ 00:16:00.110 19:24:58 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:00.110 19:24:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:00.110 19:24:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.110 19:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:00.110 ************************************ 00:16:00.110 START TEST nvmf_vfio_user_fuzz 00:16:00.110 ************************************ 00:16:00.110 19:24:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:00.110 * Looking for test storage... 
00:16:00.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.110 19:24:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:00.110 19:24:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:00.110 19:24:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:00.110 19:24:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:00.110 19:24:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:00.110 19:24:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:00.110 19:24:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:00.110 19:24:58 -- scripts/common.sh@335 -- # IFS=.-: 00:16:00.111 19:24:58 -- scripts/common.sh@335 -- # read -ra ver1 00:16:00.111 19:24:58 -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.111 19:24:58 -- scripts/common.sh@336 -- # read -ra ver2 00:16:00.111 19:24:58 -- scripts/common.sh@337 -- # local 'op=<' 00:16:00.111 19:24:58 -- scripts/common.sh@339 -- # ver1_l=2 00:16:00.111 19:24:58 -- scripts/common.sh@340 -- # ver2_l=1 00:16:00.111 19:24:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:00.111 19:24:58 -- scripts/common.sh@343 -- # case "$op" in 00:16:00.111 19:24:58 -- scripts/common.sh@344 -- # : 1 00:16:00.111 19:24:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:00.111 19:24:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:00.111 19:24:58 -- scripts/common.sh@364 -- # decimal 1 00:16:00.111 19:24:58 -- scripts/common.sh@352 -- # local d=1 00:16:00.111 19:24:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.111 19:24:58 -- scripts/common.sh@354 -- # echo 1 00:16:00.111 19:24:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:00.111 19:24:58 -- scripts/common.sh@365 -- # decimal 2 00:16:00.111 19:24:58 -- scripts/common.sh@352 -- # local d=2 00:16:00.111 19:24:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.111 19:24:58 -- scripts/common.sh@354 -- # echo 2 00:16:00.111 19:24:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:00.111 19:24:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:00.111 19:24:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:00.111 19:24:58 -- scripts/common.sh@367 -- # return 0 00:16:00.369 19:24:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.369 19:24:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:00.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.369 --rc genhtml_branch_coverage=1 00:16:00.369 --rc genhtml_function_coverage=1 00:16:00.369 --rc genhtml_legend=1 00:16:00.369 --rc geninfo_all_blocks=1 00:16:00.369 --rc geninfo_unexecuted_blocks=1 00:16:00.369 00:16:00.369 ' 00:16:00.369 19:24:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:00.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.369 --rc genhtml_branch_coverage=1 00:16:00.369 --rc genhtml_function_coverage=1 00:16:00.369 --rc genhtml_legend=1 00:16:00.369 --rc geninfo_all_blocks=1 00:16:00.369 --rc geninfo_unexecuted_blocks=1 00:16:00.369 00:16:00.370 ' 00:16:00.370 19:24:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.370 --rc genhtml_branch_coverage=1 00:16:00.370 --rc genhtml_function_coverage=1 00:16:00.370 --rc genhtml_legend=1 00:16:00.370 --rc geninfo_all_blocks=1 00:16:00.370 --rc geninfo_unexecuted_blocks=1 00:16:00.370 00:16:00.370 
' 00:16:00.370 19:24:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:00.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.370 --rc genhtml_branch_coverage=1 00:16:00.370 --rc genhtml_function_coverage=1 00:16:00.370 --rc genhtml_legend=1 00:16:00.370 --rc geninfo_all_blocks=1 00:16:00.370 --rc geninfo_unexecuted_blocks=1 00:16:00.370 00:16:00.370 ' 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.370 19:24:58 -- nvmf/common.sh@7 -- # uname -s 00:16:00.370 19:24:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.370 19:24:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.370 19:24:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.370 19:24:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.370 19:24:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.370 19:24:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.370 19:24:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.370 19:24:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.370 19:24:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.370 19:24:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.370 19:24:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.370 19:24:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.370 19:24:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.370 19:24:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.370 19:24:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.370 19:24:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.370 19:24:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.370 19:24:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.370 19:24:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.370 19:24:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.370 19:24:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.370 19:24:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.370 19:24:58 -- paths/export.sh@5 -- # export PATH 00:16:00.370 19:24:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.370 19:24:58 -- nvmf/common.sh@46 -- # : 0 00:16:00.370 19:24:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:00.370 19:24:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:00.370 19:24:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:00.370 19:24:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.370 19:24:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.370 19:24:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:00.370 19:24:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:00.370 19:24:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1179840 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1179840' 00:16:00.370 Process pid: 1179840 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.370 19:24:58 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1179840 00:16:00.370 19:24:58 -- common/autotest_common.sh@829 -- # '[' -z 1179840 ']' 00:16:00.370 19:24:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.370 19:24:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.370 19:24:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
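Like the compliance run above, the fuzz test stands up a single-controller VFIO-user target over the RPC interface before pointing its tool at the socket. A condensed sketch of that sequence and of the fuzzer invocation that appears further down in the trace; the rpc() wrapper below stands in for the suite's rpc_cmd helper and $SPDK for the workspace checkout, both assumptions for brevity (the compliance run additionally passes -m 32 to nvmf_create_subsystem):

# One-controller VFIO-user target, as set up by the compliance and fuzz tests.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # stand-in for the suite's rpc_cmd helper

rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc bdev_malloc_create 64 512 -b malloc0
rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# The fuzzer then connects to that socket for 30 seconds with a fixed seed,
# matching the invocation later in the trace.
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz \
    -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a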
00:16:00.370 19:24:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.370 19:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:00.628 19:24:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.628 19:24:58 -- common/autotest_common.sh@862 -- # return 0 00:16:00.628 19:24:58 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:01.561 19:24:59 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:01.561 19:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.561 19:24:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 19:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:01.562 19:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.562 19:24:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 malloc0 00:16:01.562 19:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:01.562 19:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.562 19:24:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 19:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:01.562 19:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.562 19:24:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 19:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:01.562 19:24:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.562 19:24:59 -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 19:24:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:01.562 19:24:59 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:33.633 Fuzzing completed. 
Shutting down the fuzz application 00:16:33.633 00:16:33.633 Dumping successful admin opcodes: 00:16:33.633 8, 9, 10, 24, 00:16:33.633 Dumping successful io opcodes: 00:16:33.633 0, 00:16:33.633 NS: 0x200003a1ef00 I/O qp, Total commands completed: 563815, total successful commands: 2169, random_seed: 4290029056 00:16:33.633 NS: 0x200003a1ef00 admin qp, Total commands completed: 138098, total successful commands: 1118, random_seed: 2659978176 00:16:33.633 19:25:30 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:33.633 19:25:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.633 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:33.633 19:25:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.633 19:25:30 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1179840 00:16:33.633 19:25:30 -- common/autotest_common.sh@936 -- # '[' -z 1179840 ']' 00:16:33.633 19:25:30 -- common/autotest_common.sh@940 -- # kill -0 1179840 00:16:33.633 19:25:30 -- common/autotest_common.sh@941 -- # uname 00:16:33.633 19:25:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.633 19:25:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1179840 00:16:33.633 19:25:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:33.633 19:25:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:33.633 19:25:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1179840' 00:16:33.633 killing process with pid 1179840 00:16:33.633 19:25:30 -- common/autotest_common.sh@955 -- # kill 1179840 00:16:33.633 19:25:30 -- common/autotest_common.sh@960 -- # wait 1179840 00:16:33.633 19:25:30 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:33.633 19:25:30 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:33.633 00:16:33.633 real 0m32.328s 00:16:33.633 user 0m33.354s 00:16:33.633 sys 0m25.258s 00:16:33.633 19:25:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:33.633 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:33.633 ************************************ 00:16:33.633 END TEST nvmf_vfio_user_fuzz 00:16:33.633 ************************************ 00:16:33.633 19:25:30 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:33.633 19:25:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:33.633 19:25:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.633 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:33.633 ************************************ 00:16:33.633 START TEST nvmf_host_management 00:16:33.633 ************************************ 00:16:33.633 19:25:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:33.633 * Looking for test storage... 
00:16:33.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.633 19:25:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:33.633 19:25:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:33.633 19:25:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:33.633 19:25:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:33.633 19:25:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:33.633 19:25:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:33.633 19:25:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:33.633 19:25:30 -- scripts/common.sh@335 -- # IFS=.-: 00:16:33.633 19:25:30 -- scripts/common.sh@335 -- # read -ra ver1 00:16:33.633 19:25:30 -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.633 19:25:30 -- scripts/common.sh@336 -- # read -ra ver2 00:16:33.633 19:25:30 -- scripts/common.sh@337 -- # local 'op=<' 00:16:33.633 19:25:30 -- scripts/common.sh@339 -- # ver1_l=2 00:16:33.633 19:25:30 -- scripts/common.sh@340 -- # ver2_l=1 00:16:33.633 19:25:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:33.633 19:25:30 -- scripts/common.sh@343 -- # case "$op" in 00:16:33.633 19:25:30 -- scripts/common.sh@344 -- # : 1 00:16:33.633 19:25:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:33.633 19:25:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.633 19:25:30 -- scripts/common.sh@364 -- # decimal 1 00:16:33.633 19:25:30 -- scripts/common.sh@352 -- # local d=1 00:16:33.633 19:25:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.633 19:25:30 -- scripts/common.sh@354 -- # echo 1 00:16:33.633 19:25:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:33.633 19:25:30 -- scripts/common.sh@365 -- # decimal 2 00:16:33.633 19:25:30 -- scripts/common.sh@352 -- # local d=2 00:16:33.633 19:25:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.633 19:25:30 -- scripts/common.sh@354 -- # echo 2 00:16:33.633 19:25:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:33.633 19:25:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:33.633 19:25:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:33.633 19:25:30 -- scripts/common.sh@367 -- # return 0 00:16:33.633 19:25:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.633 19:25:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:33.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.633 --rc genhtml_branch_coverage=1 00:16:33.633 --rc genhtml_function_coverage=1 00:16:33.633 --rc genhtml_legend=1 00:16:33.633 --rc geninfo_all_blocks=1 00:16:33.633 --rc geninfo_unexecuted_blocks=1 00:16:33.633 00:16:33.633 ' 00:16:33.633 19:25:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:33.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.633 --rc genhtml_branch_coverage=1 00:16:33.633 --rc genhtml_function_coverage=1 00:16:33.633 --rc genhtml_legend=1 00:16:33.633 --rc geninfo_all_blocks=1 00:16:33.633 --rc geninfo_unexecuted_blocks=1 00:16:33.633 00:16:33.633 ' 00:16:33.633 19:25:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:33.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.633 --rc genhtml_branch_coverage=1 00:16:33.633 --rc genhtml_function_coverage=1 00:16:33.633 --rc genhtml_legend=1 00:16:33.633 --rc geninfo_all_blocks=1 00:16:33.633 --rc geninfo_unexecuted_blocks=1 00:16:33.633 00:16:33.633 
' 00:16:33.633 19:25:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:33.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.633 --rc genhtml_branch_coverage=1 00:16:33.633 --rc genhtml_function_coverage=1 00:16:33.634 --rc genhtml_legend=1 00:16:33.634 --rc geninfo_all_blocks=1 00:16:33.634 --rc geninfo_unexecuted_blocks=1 00:16:33.634 00:16:33.634 ' 00:16:33.634 19:25:30 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.634 19:25:30 -- nvmf/common.sh@7 -- # uname -s 00:16:33.634 19:25:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.634 19:25:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.634 19:25:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.634 19:25:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.634 19:25:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.634 19:25:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.634 19:25:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.634 19:25:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.634 19:25:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.634 19:25:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.634 19:25:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.634 19:25:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.634 19:25:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.634 19:25:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.634 19:25:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.634 19:25:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.634 19:25:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.634 19:25:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.634 19:25:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.634 19:25:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 19:25:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 19:25:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 19:25:30 -- paths/export.sh@5 -- # export PATH 00:16:33.634 19:25:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 19:25:30 -- nvmf/common.sh@46 -- # : 0 00:16:33.634 19:25:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:33.634 19:25:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:33.634 19:25:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:33.634 19:25:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.634 19:25:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.634 19:25:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:33.634 19:25:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:33.634 19:25:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:33.634 19:25:30 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.634 19:25:30 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.634 19:25:30 -- target/host_management.sh@104 -- # nvmftestinit 00:16:33.634 19:25:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:33.634 19:25:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.634 19:25:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:33.634 19:25:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:33.634 19:25:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:33.634 19:25:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.634 19:25:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.634 19:25:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.634 19:25:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:33.634 19:25:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:33.634 19:25:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:33.634 19:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:35.010 19:25:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:35.010 19:25:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:35.010 19:25:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:35.010 19:25:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:35.010 19:25:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:35.010 19:25:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:35.010 19:25:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:35.010 19:25:32 -- nvmf/common.sh@294 -- # net_devs=() 00:16:35.010 19:25:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:35.010 
19:25:32 -- nvmf/common.sh@295 -- # e810=() 00:16:35.010 19:25:32 -- nvmf/common.sh@295 -- # local -ga e810 00:16:35.010 19:25:32 -- nvmf/common.sh@296 -- # x722=() 00:16:35.010 19:25:32 -- nvmf/common.sh@296 -- # local -ga x722 00:16:35.010 19:25:32 -- nvmf/common.sh@297 -- # mlx=() 00:16:35.010 19:25:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:35.010 19:25:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.010 19:25:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:35.010 19:25:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:35.010 19:25:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:35.010 19:25:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:35.010 19:25:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:35.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:35.010 19:25:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:35.010 19:25:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:35.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:35.010 19:25:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:35.010 19:25:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:35.010 19:25:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.010 19:25:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:35.010 19:25:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.010 19:25:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:35.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:35.010 19:25:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.010 19:25:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:35.010 19:25:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.010 19:25:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:35.010 19:25:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.010 19:25:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:35.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:35.010 19:25:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.010 19:25:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:35.010 19:25:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:35.010 19:25:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:35.010 19:25:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.010 19:25:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.010 19:25:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.010 19:25:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:35.010 19:25:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.010 19:25:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.010 19:25:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:35.010 19:25:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.010 19:25:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.010 19:25:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:35.010 19:25:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:35.010 19:25:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.010 19:25:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.010 19:25:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.010 19:25:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.010 19:25:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:35.010 19:25:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.010 19:25:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.010 19:25:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.010 19:25:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:35.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:16:35.010 00:16:35.010 --- 10.0.0.2 ping statistics --- 00:16:35.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.010 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:35.010 19:25:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:16:35.010 00:16:35.010 --- 10.0.0.1 ping statistics --- 00:16:35.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.010 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:35.010 19:25:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.010 19:25:32 -- nvmf/common.sh@410 -- # return 0 00:16:35.010 19:25:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.010 19:25:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.010 19:25:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.010 19:25:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.010 19:25:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.010 19:25:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.010 19:25:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:35.010 19:25:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:35.010 19:25:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.010 19:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.010 ************************************ 00:16:35.010 START TEST nvmf_host_management 00:16:35.010 ************************************ 00:16:35.010 19:25:33 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:16:35.010 19:25:33 -- target/host_management.sh@69 -- # starttarget 00:16:35.010 19:25:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:35.010 19:25:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.010 19:25:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.010 19:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.010 19:25:33 -- nvmf/common.sh@469 -- # nvmfpid=1185423 00:16:35.010 19:25:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:35.010 19:25:33 -- nvmf/common.sh@470 -- # waitforlisten 1185423 00:16:35.011 19:25:33 -- common/autotest_common.sh@829 -- # '[' -z 1185423 ']' 00:16:35.011 19:25:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.011 19:25:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.011 19:25:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.011 19:25:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.011 19:25:33 -- common/autotest_common.sh@10 -- # set +x 00:16:35.011 [2024-11-17 19:25:33.059668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:35.011 [2024-11-17 19:25:33.059762] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.011 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.011 [2024-11-17 19:25:33.131245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.011 [2024-11-17 19:25:33.224267] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.011 [2024-11-17 19:25:33.224436] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.011 [2024-11-17 19:25:33.224458] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.011 [2024-11-17 19:25:33.224473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.011 [2024-11-17 19:25:33.224570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.011 [2024-11-17 19:25:33.224596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.011 [2024-11-17 19:25:33.224618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:35.011 [2024-11-17 19:25:33.224620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.949 19:25:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.949 19:25:34 -- common/autotest_common.sh@862 -- # return 0 00:16:35.949 19:25:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.949 19:25:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.949 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 19:25:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.949 19:25:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.949 19:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.949 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 [2024-11-17 19:25:34.115596] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.949 19:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.949 19:25:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:35.949 19:25:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.949 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 19:25:34 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:35.949 19:25:34 -- target/host_management.sh@23 -- # cat 00:16:35.949 19:25:34 -- target/host_management.sh@30 -- # rpc_cmd 00:16:35.949 19:25:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.949 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 Malloc0 00:16:35.949 [2024-11-17 19:25:34.176630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.949 19:25:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.949 19:25:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:35.949 19:25:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.949 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 19:25:34 -- target/host_management.sh@73 -- # perfpid=1185599 00:16:35.949 19:25:34 -- target/host_management.sh@74 -- # 
waitforlisten 1185599 /var/tmp/bdevperf.sock 00:16:35.949 19:25:34 -- common/autotest_common.sh@829 -- # '[' -z 1185599 ']' 00:16:35.949 19:25:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.949 19:25:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:35.949 19:25:34 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:35.949 19:25:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.949 19:25:34 -- nvmf/common.sh@520 -- # config=() 00:16:35.949 19:25:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.949 19:25:34 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.949 19:25:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.949 19:25:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.949 19:25:34 -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 19:25:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.949 { 00:16:35.949 "params": { 00:16:35.949 "name": "Nvme$subsystem", 00:16:35.949 "trtype": "$TEST_TRANSPORT", 00:16:35.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.949 "adrfam": "ipv4", 00:16:35.949 "trsvcid": "$NVMF_PORT", 00:16:35.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.949 "hdgst": ${hdgst:-false}, 00:16:35.949 "ddgst": ${ddgst:-false} 00:16:35.949 }, 00:16:35.949 "method": "bdev_nvme_attach_controller" 00:16:35.949 } 00:16:35.949 EOF 00:16:35.949 )") 00:16:35.949 19:25:34 -- nvmf/common.sh@542 -- # cat 00:16:36.208 19:25:34 -- nvmf/common.sh@544 -- # jq . 00:16:36.208 19:25:34 -- nvmf/common.sh@545 -- # IFS=, 00:16:36.208 19:25:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:36.208 "params": { 00:16:36.208 "name": "Nvme0", 00:16:36.208 "trtype": "tcp", 00:16:36.208 "traddr": "10.0.0.2", 00:16:36.208 "adrfam": "ipv4", 00:16:36.208 "trsvcid": "4420", 00:16:36.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:36.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:36.208 "hdgst": false, 00:16:36.208 "ddgst": false 00:16:36.208 }, 00:16:36.208 "method": "bdev_nvme_attach_controller" 00:16:36.208 }' 00:16:36.208 [2024-11-17 19:25:34.252872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:36.208 [2024-11-17 19:25:34.252945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185599 ] 00:16:36.208 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.208 [2024-11-17 19:25:34.312919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.208 [2024-11-17 19:25:34.397931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.466 Running I/O for 10 seconds... 
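For context on the invocation captured just above: the I/O load comes from a separate bdevperf process that attaches to the target purely through a generated JSON config delivered on /dev/fd/63, so no local NVMe device is involved. A minimal standalone sketch of the same pattern, assuming nvmf/common.sh has been sourced (so gen_nvmf_target_json is available), the target from this trace is already listening on 10.0.0.2:4420, and paths are shortened to be relative to the SPDK checkout:

    # Feed the generated bdev config to bdevperf via process substitution,
    # mirroring the --json /dev/fd/63 redirection seen in the trace above.
    #   -q 64     queue depth
    #   -o 65536  I/O size in bytes
    #   -w verify verification workload
    #   -t 10     run time in seconds
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10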
00:16:37.035 19:25:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.035 19:25:35 -- common/autotest_common.sh@862 -- # return 0 00:16:37.035 19:25:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:37.035 19:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.035 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:37.035 19:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.035 19:25:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.035 19:25:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:37.035 19:25:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:37.035 19:25:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:37.035 19:25:35 -- target/host_management.sh@52 -- # local ret=1 00:16:37.035 19:25:35 -- target/host_management.sh@53 -- # local i 00:16:37.035 19:25:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:37.035 19:25:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:37.035 19:25:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:37.035 19:25:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:37.035 19:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.035 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:37.035 19:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.035 19:25:35 -- target/host_management.sh@55 -- # read_io_count=2062 00:16:37.035 19:25:35 -- target/host_management.sh@58 -- # '[' 2062 -ge 100 ']' 00:16:37.035 19:25:35 -- target/host_management.sh@59 -- # ret=0 00:16:37.035 19:25:35 -- target/host_management.sh@60 -- # break 00:16:37.035 19:25:35 -- target/host_management.sh@64 -- # return 0 00:16:37.035 19:25:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:37.035 19:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.035 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:37.035 [2024-11-17 19:25:35.292398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to 
be set 00:16:37.035 [2024-11-17 19:25:35.292620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.292913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5ad0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.293317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.035 [2024-11-17 19:25:35.293361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.035 [2024-11-17 19:25:35.293379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.035 [2024-11-17 19:25:35.293393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.035 [2024-11-17 19:25:35.293407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.035 [2024-11-17 19:25:35.293420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.035 [2024-11-17 19:25:35.293434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.035 [2024-11-17 19:25:35.293447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.035 [2024-11-17 19:25:35.293460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20801d0 is same with the state(5) to be set 00:16:37.035 [2024-11-17 19:25:35.293520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.035 [2024-11-17 19:25:35.293541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.035 [2024-11-17 19:25:35.293569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.035 [2024-11-17 19:25:35.293585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.035 [2024-11-17 19:25:35.293601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 
[2024-11-17 19:25:35.293711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.293973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.293988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 
19:25:35.294018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.036 [2024-11-17 19:25:35.294660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.036 [2024-11-17 19:25:35.294681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.294966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.294988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.037 [2024-11-17 19:25:35.295474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.037 [2024-11-17 19:25:35.295562] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x207a630 was disconnected and freed. reset controller. 
00:16:37.037 [2024-11-17 19:25:35.296712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:37.037 19:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.037 19:25:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:37.037 19:25:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.037 19:25:35 -- common/autotest_common.sh@10 -- # set +x 00:16:37.037 task offset: 20224 on job bdev=Nvme0n1 fails 00:16:37.037 00:16:37.037 Latency(us) 00:16:37.037 [2024-11-17T18:25:35.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.037 [2024-11-17T18:25:35.304Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:37.037 [2024-11-17T18:25:35.304Z] Job: Nvme0n1 ended in about 0.56 seconds with error 00:16:37.037 Verification LBA range: start 0x0 length 0x400 00:16:37.037 Nvme0n1 : 0.56 3960.45 247.53 113.66 0.00 15451.80 2609.30 23592.96 00:16:37.037 [2024-11-17T18:25:35.304Z] =================================================================================================================== 00:16:37.037 [2024-11-17T18:25:35.305Z] Total : 3960.45 247.53 113.66 0.00 15451.80 2609.30 23592.96 00:16:37.038 [2024-11-17 19:25:35.298571] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:37.038 [2024-11-17 19:25:35.298599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20801d0 (9): Bad file descriptor 00:16:37.298 [2024-11-17 19:25:35.299967] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:37.298 [2024-11-17 19:25:35.300087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:37.298 [2024-11-17 19:25:35.300118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.298 [2024-11-17 19:25:35.300141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:37.298 [2024-11-17 19:25:35.300158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:37.298 [2024-11-17 19:25:35.300173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:37.298 [2024-11-17 19:25:35.300186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20801d0 00:16:37.298 [2024-11-17 19:25:35.300219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20801d0 (9): Bad file descriptor 00:16:37.298 [2024-11-17 19:25:35.300244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:37.298 [2024-11-17 19:25:35.300259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:37.298 [2024-11-17 19:25:35.300275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:37.298 [2024-11-17 19:25:35.300297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
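The failing run above is the point of the host-management test: while I/O is in flight, the host NQN is removed from the subsystem, the target drops the queue pair (hence the run of ABORTED - SQ DELETION completions and the "does not allow host" FABRIC CONNECT rejection), and the host is re-added so the follow-up run can complete cleanly. A minimal sketch of that RPC sequence, assuming the nvmf_tgt from this trace is running and scripts/rpc.py talks to its default /var/tmp/spdk.sock (the test issues the same calls through its rpc_cmd wrapper):

    # Deny the host: in-flight I/O is aborted and reconnect attempts are rejected.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0
    # Allow the host again so a subsequent bdevperf run can attach and pass.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0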
00:16:37.298 19:25:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.298 19:25:35 -- target/host_management.sh@87 -- # sleep 1 00:16:38.235 19:25:36 -- target/host_management.sh@91 -- # kill -9 1185599 00:16:38.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1185599) - No such process 00:16:38.235 19:25:36 -- target/host_management.sh@91 -- # true 00:16:38.236 19:25:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:38.236 19:25:36 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:38.236 19:25:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:38.236 19:25:36 -- nvmf/common.sh@520 -- # config=() 00:16:38.236 19:25:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:38.236 19:25:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:38.236 19:25:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:38.236 { 00:16:38.236 "params": { 00:16:38.236 "name": "Nvme$subsystem", 00:16:38.236 "trtype": "$TEST_TRANSPORT", 00:16:38.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:38.236 "adrfam": "ipv4", 00:16:38.236 "trsvcid": "$NVMF_PORT", 00:16:38.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:38.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:38.236 "hdgst": ${hdgst:-false}, 00:16:38.236 "ddgst": ${ddgst:-false} 00:16:38.236 }, 00:16:38.236 "method": "bdev_nvme_attach_controller" 00:16:38.236 } 00:16:38.236 EOF 00:16:38.236 )") 00:16:38.236 19:25:36 -- nvmf/common.sh@542 -- # cat 00:16:38.236 19:25:36 -- nvmf/common.sh@544 -- # jq . 00:16:38.236 19:25:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:38.236 19:25:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:38.236 "params": { 00:16:38.236 "name": "Nvme0", 00:16:38.236 "trtype": "tcp", 00:16:38.236 "traddr": "10.0.0.2", 00:16:38.236 "adrfam": "ipv4", 00:16:38.236 "trsvcid": "4420", 00:16:38.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:38.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:38.236 "hdgst": false, 00:16:38.236 "ddgst": false 00:16:38.236 }, 00:16:38.236 "method": "bdev_nvme_attach_controller" 00:16:38.236 }' 00:16:38.236 [2024-11-17 19:25:36.348224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:38.236 [2024-11-17 19:25:36.348300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185885 ] 00:16:38.236 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.236 [2024-11-17 19:25:36.408144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.236 [2024-11-17 19:25:36.494411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.806 Running I/O for 1 seconds... 
00:16:39.745 00:16:39.745 Latency(us) 00:16:39.745 [2024-11-17T18:25:38.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.745 [2024-11-17T18:25:38.012Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.745 Verification LBA range: start 0x0 length 0x400 00:16:39.745 Nvme0n1 : 1.01 3998.76 249.92 0.00 0.00 15746.34 1304.65 22816.24 00:16:39.745 [2024-11-17T18:25:38.012Z] =================================================================================================================== 00:16:39.745 [2024-11-17T18:25:38.012Z] Total : 3998.76 249.92 0.00 0.00 15746.34 1304.65 22816.24 00:16:40.037 19:25:38 -- target/host_management.sh@101 -- # stoptarget 00:16:40.037 19:25:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:40.037 19:25:38 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:40.037 19:25:38 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:40.037 19:25:38 -- target/host_management.sh@40 -- # nvmftestfini 00:16:40.037 19:25:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.037 19:25:38 -- nvmf/common.sh@116 -- # sync 00:16:40.037 19:25:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.037 19:25:38 -- nvmf/common.sh@119 -- # set +e 00:16:40.037 19:25:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.037 19:25:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.037 rmmod nvme_tcp 00:16:40.037 rmmod nvme_fabrics 00:16:40.037 rmmod nvme_keyring 00:16:40.037 19:25:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.037 19:25:38 -- nvmf/common.sh@123 -- # set -e 00:16:40.037 19:25:38 -- nvmf/common.sh@124 -- # return 0 00:16:40.037 19:25:38 -- nvmf/common.sh@477 -- # '[' -n 1185423 ']' 00:16:40.037 19:25:38 -- nvmf/common.sh@478 -- # killprocess 1185423 00:16:40.037 19:25:38 -- common/autotest_common.sh@936 -- # '[' -z 1185423 ']' 00:16:40.037 19:25:38 -- common/autotest_common.sh@940 -- # kill -0 1185423 00:16:40.037 19:25:38 -- common/autotest_common.sh@941 -- # uname 00:16:40.037 19:25:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.037 19:25:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1185423 00:16:40.037 19:25:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.037 19:25:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.037 19:25:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1185423' 00:16:40.037 killing process with pid 1185423 00:16:40.037 19:25:38 -- common/autotest_common.sh@955 -- # kill 1185423 00:16:40.037 19:25:38 -- common/autotest_common.sh@960 -- # wait 1185423 00:16:40.299 [2024-11-17 19:25:38.360780] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:40.299 19:25:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.299 19:25:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:40.299 19:25:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:40.299 19:25:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.299 19:25:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:40.299 19:25:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.299 19:25:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.299 19:25:38 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.204 19:25:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:42.204 00:16:42.204 real 0m7.417s 00:16:42.204 user 0m23.312s 00:16:42.204 sys 0m1.442s 00:16:42.204 19:25:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.204 19:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:42.204 ************************************ 00:16:42.204 END TEST nvmf_host_management 00:16:42.204 ************************************ 00:16:42.204 19:25:40 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:42.204 00:16:42.204 real 0m9.868s 00:16:42.204 user 0m24.204s 00:16:42.204 sys 0m3.028s 00:16:42.204 19:25:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.204 19:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:42.204 ************************************ 00:16:42.204 END TEST nvmf_host_management 00:16:42.204 ************************************ 00:16:42.462 19:25:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:42.462 19:25:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.462 19:25:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.462 19:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:42.462 ************************************ 00:16:42.462 START TEST nvmf_lvol 00:16:42.462 ************************************ 00:16:42.462 19:25:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:42.462 * Looking for test storage... 00:16:42.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.462 19:25:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:42.462 19:25:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:42.462 19:25:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:42.462 19:25:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:42.462 19:25:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:42.462 19:25:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:42.462 19:25:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:42.462 19:25:40 -- scripts/common.sh@335 -- # IFS=.-: 00:16:42.462 19:25:40 -- scripts/common.sh@335 -- # read -ra ver1 00:16:42.462 19:25:40 -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.462 19:25:40 -- scripts/common.sh@336 -- # read -ra ver2 00:16:42.462 19:25:40 -- scripts/common.sh@337 -- # local 'op=<' 00:16:42.462 19:25:40 -- scripts/common.sh@339 -- # ver1_l=2 00:16:42.462 19:25:40 -- scripts/common.sh@340 -- # ver2_l=1 00:16:42.462 19:25:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:42.462 19:25:40 -- scripts/common.sh@343 -- # case "$op" in 00:16:42.462 19:25:40 -- scripts/common.sh@344 -- # : 1 00:16:42.462 19:25:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:42.462 19:25:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.462 19:25:40 -- scripts/common.sh@364 -- # decimal 1 00:16:42.462 19:25:40 -- scripts/common.sh@352 -- # local d=1 00:16:42.462 19:25:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.462 19:25:40 -- scripts/common.sh@354 -- # echo 1 00:16:42.462 19:25:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:42.462 19:25:40 -- scripts/common.sh@365 -- # decimal 2 00:16:42.462 19:25:40 -- scripts/common.sh@352 -- # local d=2 00:16:42.462 19:25:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.462 19:25:40 -- scripts/common.sh@354 -- # echo 2 00:16:42.462 19:25:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:42.462 19:25:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:42.462 19:25:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:42.462 19:25:40 -- scripts/common.sh@367 -- # return 0 00:16:42.462 19:25:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.462 19:25:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:42.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.462 --rc genhtml_branch_coverage=1 00:16:42.462 --rc genhtml_function_coverage=1 00:16:42.462 --rc genhtml_legend=1 00:16:42.462 --rc geninfo_all_blocks=1 00:16:42.462 --rc geninfo_unexecuted_blocks=1 00:16:42.462 00:16:42.462 ' 00:16:42.462 19:25:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:42.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.462 --rc genhtml_branch_coverage=1 00:16:42.462 --rc genhtml_function_coverage=1 00:16:42.462 --rc genhtml_legend=1 00:16:42.462 --rc geninfo_all_blocks=1 00:16:42.462 --rc geninfo_unexecuted_blocks=1 00:16:42.462 00:16:42.462 ' 00:16:42.462 19:25:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:42.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.462 --rc genhtml_branch_coverage=1 00:16:42.462 --rc genhtml_function_coverage=1 00:16:42.462 --rc genhtml_legend=1 00:16:42.462 --rc geninfo_all_blocks=1 00:16:42.462 --rc geninfo_unexecuted_blocks=1 00:16:42.462 00:16:42.462 ' 00:16:42.462 19:25:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:42.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.462 --rc genhtml_branch_coverage=1 00:16:42.462 --rc genhtml_function_coverage=1 00:16:42.462 --rc genhtml_legend=1 00:16:42.462 --rc geninfo_all_blocks=1 00:16:42.462 --rc geninfo_unexecuted_blocks=1 00:16:42.462 00:16:42.462 ' 00:16:42.462 19:25:40 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.462 19:25:40 -- nvmf/common.sh@7 -- # uname -s 00:16:42.462 19:25:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.462 19:25:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.462 19:25:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.462 19:25:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.462 19:25:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.462 19:25:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.462 19:25:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.462 19:25:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.462 19:25:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.462 19:25:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.462 19:25:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.462 19:25:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.462 19:25:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.462 19:25:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.462 19:25:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.462 19:25:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.462 19:25:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.462 19:25:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.462 19:25:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.462 19:25:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.462 19:25:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.462 19:25:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.462 19:25:40 -- paths/export.sh@5 -- # export PATH 00:16:42.462 19:25:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.462 19:25:40 -- nvmf/common.sh@46 -- # : 0 00:16:42.463 19:25:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:42.463 19:25:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:42.463 19:25:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:42.463 19:25:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.463 19:25:40 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.463 19:25:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:42.463 19:25:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:42.463 19:25:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:42.463 19:25:40 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.463 19:25:40 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.463 19:25:40 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:42.463 19:25:40 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:42.463 19:25:40 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:42.463 19:25:40 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:42.463 19:25:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:42.463 19:25:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.463 19:25:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:42.463 19:25:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:42.463 19:25:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:42.463 19:25:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.463 19:25:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.463 19:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.463 19:25:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:42.463 19:25:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:42.463 19:25:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:42.463 19:25:40 -- common/autotest_common.sh@10 -- # set +x 00:16:44.369 19:25:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:44.369 19:25:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:44.369 19:25:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:44.369 19:25:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:44.369 19:25:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:44.369 19:25:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:44.369 19:25:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:44.369 19:25:42 -- nvmf/common.sh@294 -- # net_devs=() 00:16:44.369 19:25:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:44.369 19:25:42 -- nvmf/common.sh@295 -- # e810=() 00:16:44.369 19:25:42 -- nvmf/common.sh@295 -- # local -ga e810 00:16:44.369 19:25:42 -- nvmf/common.sh@296 -- # x722=() 00:16:44.369 19:25:42 -- nvmf/common.sh@296 -- # local -ga x722 00:16:44.369 19:25:42 -- nvmf/common.sh@297 -- # mlx=() 00:16:44.369 19:25:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:44.369 19:25:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.369 19:25:42 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.369 19:25:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:44.369 19:25:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:44.369 19:25:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:44.369 19:25:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:44.369 19:25:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.369 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.369 19:25:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:44.369 19:25:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.369 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.369 19:25:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:44.369 19:25:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:44.369 19:25:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.369 19:25:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:44.369 19:25:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.369 19:25:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:44.369 19:25:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.369 19:25:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:44.369 19:25:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.369 19:25:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:44.369 19:25:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.369 19:25:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.369 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.369 19:25:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.369 19:25:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:44.369 19:25:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:44.369 19:25:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:44.369 19:25:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:44.369 19:25:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.369 19:25:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.369 19:25:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.369 
19:25:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:44.369 19:25:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.369 19:25:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.369 19:25:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:44.369 19:25:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.369 19:25:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.369 19:25:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:44.369 19:25:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:44.369 19:25:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.369 19:25:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.627 19:25:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.627 19:25:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.627 19:25:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:44.627 19:25:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.627 19:25:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.627 19:25:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.627 19:25:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:44.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:16:44.627 00:16:44.627 --- 10.0.0.2 ping statistics --- 00:16:44.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.627 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:16:44.627 19:25:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:16:44.627 00:16:44.627 --- 10.0.0.1 ping statistics --- 00:16:44.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.627 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:44.627 19:25:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.627 19:25:42 -- nvmf/common.sh@410 -- # return 0 00:16:44.627 19:25:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.627 19:25:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.627 19:25:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.627 19:25:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.627 19:25:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.627 19:25:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.627 19:25:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:44.627 19:25:42 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:44.627 19:25:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:44.627 19:25:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.627 19:25:42 -- common/autotest_common.sh@10 -- # set +x 00:16:44.627 19:25:42 -- nvmf/common.sh@469 -- # nvmfpid=1188008 00:16:44.627 19:25:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:44.627 19:25:42 -- nvmf/common.sh@470 -- # waitforlisten 1188008 00:16:44.627 19:25:42 -- common/autotest_common.sh@829 -- # '[' -z 1188008 ']' 00:16:44.627 19:25:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.627 19:25:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.627 19:25:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.627 19:25:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.627 19:25:42 -- common/autotest_common.sh@10 -- # set +x 00:16:44.627 [2024-11-17 19:25:42.819705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:44.627 [2024-11-17 19:25:42.819794] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.627 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.627 [2024-11-17 19:25:42.888198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.885 [2024-11-17 19:25:42.981680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.886 [2024-11-17 19:25:42.981827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.886 [2024-11-17 19:25:42.981844] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.886 [2024-11-17 19:25:42.981857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
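The two pings above confirm the TCP test topology that the common helpers build from the two physical ports before the target starts. A condensed sketch of that wiring as traced in the xtrace (interface names are the ones detected on this host):

  # Target side is isolated in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # Initiator side stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # NVMF_INITIATOR_IP
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT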
00:16:44.886 [2024-11-17 19:25:42.981917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.886 [2024-11-17 19:25:42.981967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.886 [2024-11-17 19:25:42.981970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.819 19:25:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.819 19:25:43 -- common/autotest_common.sh@862 -- # return 0 00:16:45.819 19:25:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.819 19:25:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.819 19:25:43 -- common/autotest_common.sh@10 -- # set +x 00:16:45.819 19:25:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.819 19:25:43 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:45.819 [2024-11-17 19:25:44.032612] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.819 19:25:44 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.387 19:25:44 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:46.387 19:25:44 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.387 19:25:44 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:46.387 19:25:44 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:46.645 19:25:44 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:46.903 19:25:45 -- target/nvmf_lvol.sh@29 -- # lvs=93e42734-dea3-4548-9079-f85a67570996 00:16:46.903 19:25:45 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 93e42734-dea3-4548-9079-f85a67570996 lvol 20 00:16:47.161 19:25:45 -- target/nvmf_lvol.sh@32 -- # lvol=07893659-27e0-43c8-b728-938114db892e 00:16:47.161 19:25:45 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:47.419 19:25:45 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 07893659-27e0-43c8-b728-938114db892e 00:16:47.677 19:25:45 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:47.934 [2024-11-17 19:25:46.131653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.934 19:25:46 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:48.191 19:25:46 -- target/nvmf_lvol.sh@42 -- # perf_pid=1188573 00:16:48.191 19:25:46 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:48.191 19:25:46 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:48.191 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.567 
19:25:47 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 07893659-27e0-43c8-b728-938114db892e MY_SNAPSHOT 00:16:49.567 19:25:47 -- target/nvmf_lvol.sh@47 -- # snapshot=11e7f6c6-d5b0-4d1e-9534-77bdb9a6ab4b 00:16:49.567 19:25:47 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 07893659-27e0-43c8-b728-938114db892e 30 00:16:49.824 19:25:48 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 11e7f6c6-d5b0-4d1e-9534-77bdb9a6ab4b MY_CLONE 00:16:50.081 19:25:48 -- target/nvmf_lvol.sh@49 -- # clone=9b062b43-44fc-4491-a92f-565413b0978d 00:16:50.081 19:25:48 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9b062b43-44fc-4491-a92f-565413b0978d 00:16:50.648 19:25:48 -- target/nvmf_lvol.sh@53 -- # wait 1188573 00:16:58.767 Initializing NVMe Controllers 00:16:58.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:58.767 Controller IO queue size 128, less than required. 00:16:58.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:58.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:58.767 Initialization complete. Launching workers. 00:16:58.767 ======================================================== 00:16:58.767 Latency(us) 00:16:58.767 Device Information : IOPS MiB/s Average min max 00:16:58.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10736.20 41.94 11925.74 2090.28 59662.16 00:16:58.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10789.30 42.15 11869.93 2112.78 52649.37 00:16:58.767 ======================================================== 00:16:58.767 Total : 21525.50 84.08 11897.77 2090.28 59662.16 00:16:58.767 00:16:58.767 19:25:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:58.767 19:25:57 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 07893659-27e0-43c8-b728-938114db892e 00:16:59.025 19:25:57 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93e42734-dea3-4548-9079-f85a67570996 00:16:59.295 19:25:57 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:59.295 19:25:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:59.295 19:25:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:59.295 19:25:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.295 19:25:57 -- nvmf/common.sh@116 -- # sync 00:16:59.295 19:25:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.295 19:25:57 -- nvmf/common.sh@119 -- # set +e 00:16:59.295 19:25:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.295 19:25:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.295 rmmod nvme_tcp 00:16:59.295 rmmod nvme_fabrics 00:16:59.295 rmmod nvme_keyring 00:16:59.567 19:25:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.567 19:25:57 -- nvmf/common.sh@123 -- # set -e 00:16:59.567 19:25:57 -- nvmf/common.sh@124 -- # return 0 00:16:59.567 19:25:57 -- nvmf/common.sh@477 -- # '[' -n 1188008 ']' 
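For readability, a condensed recap of the logical-volume RPC sequence the nvmf_lvol test drove above (every call appears in the xtrace; UUIDs shortened to placeholders):

  scripts/rpc.py bdev_malloc_create 64 512                        # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                        # -> Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs               # -> <lvs-uuid>
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20           # 20 MiB volume
  scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30                  # grow past the snapshot size
  scripts/rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  scripts/rpc.py bdev_lvol_inflate <clone-uuid>                   # decouple the clone from its snapshot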
00:16:59.567 19:25:57 -- nvmf/common.sh@478 -- # killprocess 1188008 00:16:59.567 19:25:57 -- common/autotest_common.sh@936 -- # '[' -z 1188008 ']' 00:16:59.567 19:25:57 -- common/autotest_common.sh@940 -- # kill -0 1188008 00:16:59.567 19:25:57 -- common/autotest_common.sh@941 -- # uname 00:16:59.567 19:25:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.567 19:25:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1188008 00:16:59.567 19:25:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:59.567 19:25:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:59.567 19:25:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1188008' 00:16:59.567 killing process with pid 1188008 00:16:59.567 19:25:57 -- common/autotest_common.sh@955 -- # kill 1188008 00:16:59.567 19:25:57 -- common/autotest_common.sh@960 -- # wait 1188008 00:16:59.828 19:25:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.828 19:25:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:59.828 19:25:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:59.828 19:25:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.828 19:25:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:59.828 19:25:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.828 19:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.828 19:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.734 19:25:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:01.734 00:17:01.734 real 0m19.452s 00:17:01.734 user 1m6.404s 00:17:01.734 sys 0m5.421s 00:17:01.734 19:25:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:01.734 19:25:59 -- common/autotest_common.sh@10 -- # set +x 00:17:01.734 ************************************ 00:17:01.734 END TEST nvmf_lvol 00:17:01.734 ************************************ 00:17:01.735 19:25:59 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:01.735 19:25:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:01.735 19:25:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.735 19:25:59 -- common/autotest_common.sh@10 -- # set +x 00:17:01.735 ************************************ 00:17:01.735 START TEST nvmf_lvs_grow 00:17:01.735 ************************************ 00:17:01.735 19:25:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:01.735 * Looking for test storage... 
00:17:01.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.735 19:25:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:01.735 19:26:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:01.993 19:26:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:01.993 19:26:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:01.993 19:26:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:01.993 19:26:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:01.993 19:26:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:01.993 19:26:00 -- scripts/common.sh@335 -- # IFS=.-: 00:17:01.993 19:26:00 -- scripts/common.sh@335 -- # read -ra ver1 00:17:01.993 19:26:00 -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.993 19:26:00 -- scripts/common.sh@336 -- # read -ra ver2 00:17:01.993 19:26:00 -- scripts/common.sh@337 -- # local 'op=<' 00:17:01.993 19:26:00 -- scripts/common.sh@339 -- # ver1_l=2 00:17:01.993 19:26:00 -- scripts/common.sh@340 -- # ver2_l=1 00:17:01.993 19:26:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:01.993 19:26:00 -- scripts/common.sh@343 -- # case "$op" in 00:17:01.993 19:26:00 -- scripts/common.sh@344 -- # : 1 00:17:01.993 19:26:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:01.993 19:26:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.993 19:26:00 -- scripts/common.sh@364 -- # decimal 1 00:17:01.993 19:26:00 -- scripts/common.sh@352 -- # local d=1 00:17:01.993 19:26:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.993 19:26:00 -- scripts/common.sh@354 -- # echo 1 00:17:01.993 19:26:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:01.993 19:26:00 -- scripts/common.sh@365 -- # decimal 2 00:17:01.993 19:26:00 -- scripts/common.sh@352 -- # local d=2 00:17:01.993 19:26:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.993 19:26:00 -- scripts/common.sh@354 -- # echo 2 00:17:01.993 19:26:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:01.993 19:26:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:01.993 19:26:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:01.993 19:26:00 -- scripts/common.sh@367 -- # return 0 00:17:01.993 19:26:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.993 19:26:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:01.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.993 --rc genhtml_branch_coverage=1 00:17:01.993 --rc genhtml_function_coverage=1 00:17:01.993 --rc genhtml_legend=1 00:17:01.993 --rc geninfo_all_blocks=1 00:17:01.993 --rc geninfo_unexecuted_blocks=1 00:17:01.993 00:17:01.993 ' 00:17:01.993 19:26:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:01.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.993 --rc genhtml_branch_coverage=1 00:17:01.993 --rc genhtml_function_coverage=1 00:17:01.993 --rc genhtml_legend=1 00:17:01.993 --rc geninfo_all_blocks=1 00:17:01.993 --rc geninfo_unexecuted_blocks=1 00:17:01.993 00:17:01.993 ' 00:17:01.993 19:26:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:01.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.993 --rc genhtml_branch_coverage=1 00:17:01.993 --rc genhtml_function_coverage=1 00:17:01.993 --rc genhtml_legend=1 00:17:01.993 --rc geninfo_all_blocks=1 00:17:01.993 --rc geninfo_unexecuted_blocks=1 00:17:01.993 00:17:01.993 
' 00:17:01.993 19:26:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:01.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.993 --rc genhtml_branch_coverage=1 00:17:01.993 --rc genhtml_function_coverage=1 00:17:01.993 --rc genhtml_legend=1 00:17:01.993 --rc geninfo_all_blocks=1 00:17:01.993 --rc geninfo_unexecuted_blocks=1 00:17:01.993 00:17:01.993 ' 00:17:01.993 19:26:00 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.993 19:26:00 -- nvmf/common.sh@7 -- # uname -s 00:17:01.993 19:26:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.993 19:26:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.993 19:26:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.993 19:26:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.993 19:26:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.993 19:26:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.993 19:26:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.993 19:26:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.993 19:26:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.993 19:26:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.993 19:26:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.993 19:26:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.993 19:26:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.993 19:26:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.993 19:26:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.993 19:26:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.993 19:26:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.993 19:26:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.993 19:26:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.993 19:26:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.993 19:26:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.993 19:26:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.993 19:26:00 -- paths/export.sh@5 -- # export PATH 00:17:01.994 19:26:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.994 19:26:00 -- nvmf/common.sh@46 -- # : 0 00:17:01.994 19:26:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.994 19:26:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.994 19:26:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.994 19:26:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.994 19:26:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.994 19:26:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.994 19:26:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.994 19:26:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.994 19:26:00 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.994 19:26:00 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.994 19:26:00 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:01.994 19:26:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:01.994 19:26:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.994 19:26:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:01.994 19:26:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:01.994 19:26:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:01.994 19:26:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.994 19:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.994 19:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.994 19:26:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:01.994 19:26:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:01.994 19:26:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:01.994 19:26:00 -- common/autotest_common.sh@10 -- # set +x 00:17:03.903 19:26:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:03.903 19:26:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:03.903 19:26:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:03.903 19:26:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:03.903 19:26:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:03.903 19:26:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:03.903 19:26:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:03.903 19:26:02 -- nvmf/common.sh@294 -- # net_devs=() 00:17:03.903 19:26:02 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:17:03.903 19:26:02 -- nvmf/common.sh@295 -- # e810=() 00:17:03.903 19:26:02 -- nvmf/common.sh@295 -- # local -ga e810 00:17:03.903 19:26:02 -- nvmf/common.sh@296 -- # x722=() 00:17:03.903 19:26:02 -- nvmf/common.sh@296 -- # local -ga x722 00:17:03.903 19:26:02 -- nvmf/common.sh@297 -- # mlx=() 00:17:03.903 19:26:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:03.903 19:26:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.903 19:26:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:03.903 19:26:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:03.903 19:26:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:03.903 19:26:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:03.903 19:26:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:03.903 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:03.903 19:26:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:03.903 19:26:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:03.903 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:03.903 19:26:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:03.903 19:26:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:03.903 19:26:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.903 19:26:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:03.903 19:26:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.903 19:26:02 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:03.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:03.903 19:26:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.903 19:26:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:03.903 19:26:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.903 19:26:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:03.903 19:26:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.903 19:26:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:03.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:03.903 19:26:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.903 19:26:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:03.903 19:26:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:03.903 19:26:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:03.903 19:26:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:03.903 19:26:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.903 19:26:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.903 19:26:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.903 19:26:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:03.903 19:26:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.903 19:26:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.903 19:26:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:03.903 19:26:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.903 19:26:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.903 19:26:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:03.903 19:26:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:03.903 19:26:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.903 19:26:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.903 19:26:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.904 19:26:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.904 19:26:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:03.904 19:26:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.904 19:26:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.904 19:26:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.904 19:26:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:03.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:03.904 00:17:03.904 --- 10.0.0.2 ping statistics --- 00:17:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.904 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:03.904 19:26:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:03.904 00:17:03.904 --- 10.0.0.1 ping statistics --- 00:17:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.904 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:03.904 19:26:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.904 19:26:02 -- nvmf/common.sh@410 -- # return 0 00:17:03.904 19:26:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:03.904 19:26:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.904 19:26:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:03.904 19:26:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:03.904 19:26:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.904 19:26:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:03.904 19:26:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:04.162 19:26:02 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:04.162 19:26:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.162 19:26:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.162 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:17:04.162 19:26:02 -- nvmf/common.sh@469 -- # nvmfpid=1191791 00:17:04.162 19:26:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:04.162 19:26:02 -- nvmf/common.sh@470 -- # waitforlisten 1191791 00:17:04.162 19:26:02 -- common/autotest_common.sh@829 -- # '[' -z 1191791 ']' 00:17:04.162 19:26:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.162 19:26:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.162 19:26:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.162 19:26:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.162 19:26:02 -- common/autotest_common.sh@10 -- # set +x 00:17:04.162 [2024-11-17 19:26:02.236231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:04.162 [2024-11-17 19:26:02.236334] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.162 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.162 [2024-11-17 19:26:02.305946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.162 [2024-11-17 19:26:02.394093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.162 [2024-11-17 19:26:02.394289] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.162 [2024-11-17 19:26:02.394309] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.162 [2024-11-17 19:26:02.394324] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:04.162 [2024-11-17 19:26:02.394358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.097 19:26:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.097 19:26:03 -- common/autotest_common.sh@862 -- # return 0 00:17:05.097 19:26:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.097 19:26:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.097 19:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.097 19:26:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.097 19:26:03 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:05.355 [2024-11-17 19:26:03.552224] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:05.355 19:26:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:05.355 19:26:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.355 19:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.355 ************************************ 00:17:05.355 START TEST lvs_grow_clean 00:17:05.355 ************************************ 00:17:05.355 19:26:03 -- common/autotest_common.sh@1114 -- # lvs_grow 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:05.355 19:26:03 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:05.615 19:26:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:05.615 19:26:03 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:05.873 19:26:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=19eb0bb3-f415-4493-a30b-16daff84482d 00:17:05.873 19:26:04 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:05.873 19:26:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:06.131 19:26:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:06.131 19:26:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:06.131 19:26:04 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 19eb0bb3-f415-4493-a30b-16daff84482d lvol 150 00:17:06.701 19:26:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b238e58f-e385-45d0-9da4-fc4641951597 00:17:06.701 19:26:04 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.701 19:26:04 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:06.701 [2024-11-17 19:26:04.906070] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:06.701 [2024-11-17 19:26:04.906170] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:06.701 true 00:17:06.701 19:26:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:06.701 19:26:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:06.961 19:26:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:06.961 19:26:05 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:07.221 19:26:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b238e58f-e385-45d0-9da4-fc4641951597 00:17:07.480 19:26:05 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:07.738 [2024-11-17 19:26:05.957302] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.738 19:26:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:07.996 19:26:06 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1192349 00:17:07.996 19:26:06 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:07.996 19:26:06 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:07.996 19:26:06 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1192349 /var/tmp/bdevperf.sock 00:17:07.996 19:26:06 -- common/autotest_common.sh@829 -- # '[' -z 1192349 ']' 00:17:07.996 19:26:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.996 19:26:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.996 19:26:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.996 19:26:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.996 19:26:06 -- common/autotest_common.sh@10 -- # set +x 00:17:08.256 [2024-11-17 19:26:06.264120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
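The lines above set up the clean lvs_grow case: a 200M file backs an AIO bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters), a 150M lvol is carved out, and the file is then grown to 400M and rescanned; the grow step exercised a little further below should double the cluster count. A condensed sketch of that sequence, with rpc.py standing in for the full scripts/rpc.py path, aio_file for the aio_bdev test file, and some flags from the trace (e.g. --md-pages-per-cluster-ratio) omitted for brevity:

    # Sketch of the clean grow flow; names and sizes follow the trace.
    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M aio_file                # grow the backing file ...
    rpc.py bdev_aio_rescan aio_bdev          # ... and let the AIO bdev see it
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"  # lvstore picks up the new clusters
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'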
00:17:08.256 [2024-11-17 19:26:06.264189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192349 ] 00:17:08.256 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.256 [2024-11-17 19:26:06.324171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.256 [2024-11-17 19:26:06.413278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.515 19:26:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.515 19:26:06 -- common/autotest_common.sh@862 -- # return 0 00:17:08.515 19:26:06 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:08.772 Nvme0n1 00:17:08.772 19:26:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:09.030 [ 00:17:09.030 { 00:17:09.030 "name": "Nvme0n1", 00:17:09.030 "aliases": [ 00:17:09.030 "b238e58f-e385-45d0-9da4-fc4641951597" 00:17:09.030 ], 00:17:09.030 "product_name": "NVMe disk", 00:17:09.030 "block_size": 4096, 00:17:09.030 "num_blocks": 38912, 00:17:09.030 "uuid": "b238e58f-e385-45d0-9da4-fc4641951597", 00:17:09.030 "assigned_rate_limits": { 00:17:09.030 "rw_ios_per_sec": 0, 00:17:09.030 "rw_mbytes_per_sec": 0, 00:17:09.030 "r_mbytes_per_sec": 0, 00:17:09.030 "w_mbytes_per_sec": 0 00:17:09.030 }, 00:17:09.030 "claimed": false, 00:17:09.030 "zoned": false, 00:17:09.030 "supported_io_types": { 00:17:09.030 "read": true, 00:17:09.030 "write": true, 00:17:09.030 "unmap": true, 00:17:09.030 "write_zeroes": true, 00:17:09.030 "flush": true, 00:17:09.030 "reset": true, 00:17:09.030 "compare": true, 00:17:09.030 "compare_and_write": true, 00:17:09.030 "abort": true, 00:17:09.030 "nvme_admin": true, 00:17:09.030 "nvme_io": true 00:17:09.030 }, 00:17:09.030 "driver_specific": { 00:17:09.030 "nvme": [ 00:17:09.030 { 00:17:09.030 "trid": { 00:17:09.030 "trtype": "TCP", 00:17:09.030 "adrfam": "IPv4", 00:17:09.030 "traddr": "10.0.0.2", 00:17:09.030 "trsvcid": "4420", 00:17:09.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:09.030 }, 00:17:09.030 "ctrlr_data": { 00:17:09.030 "cntlid": 1, 00:17:09.030 "vendor_id": "0x8086", 00:17:09.030 "model_number": "SPDK bdev Controller", 00:17:09.030 "serial_number": "SPDK0", 00:17:09.030 "firmware_revision": "24.01.1", 00:17:09.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:09.030 "oacs": { 00:17:09.030 "security": 0, 00:17:09.030 "format": 0, 00:17:09.030 "firmware": 0, 00:17:09.030 "ns_manage": 0 00:17:09.030 }, 00:17:09.030 "multi_ctrlr": true, 00:17:09.030 "ana_reporting": false 00:17:09.030 }, 00:17:09.031 "vs": { 00:17:09.031 "nvme_version": "1.3" 00:17:09.031 }, 00:17:09.031 "ns_data": { 00:17:09.031 "id": 1, 00:17:09.031 "can_share": true 00:17:09.031 } 00:17:09.031 } 00:17:09.031 ], 00:17:09.031 "mp_policy": "active_passive" 00:17:09.031 } 00:17:09.031 } 00:17:09.031 ] 00:17:09.031 19:26:07 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1192488 00:17:09.031 19:26:07 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:09.031 19:26:07 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.290 Running I/O 
for 10 seconds... 00:17:10.229 Latency(us) 00:17:10.229 [2024-11-17T18:26:08.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.229 [2024-11-17T18:26:08.496Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.229 Nvme0n1 : 1.00 15054.00 58.80 0.00 0.00 0.00 0.00 0.00 00:17:10.229 [2024-11-17T18:26:08.496Z] =================================================================================================================== 00:17:10.229 [2024-11-17T18:26:08.496Z] Total : 15054.00 58.80 0.00 0.00 0.00 0.00 0.00 00:17:10.229 00:17:11.163 19:26:09 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:11.163 [2024-11-17T18:26:09.430Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.163 Nvme0n1 : 2.00 15124.00 59.08 0.00 0.00 0.00 0.00 0.00 00:17:11.163 [2024-11-17T18:26:09.430Z] =================================================================================================================== 00:17:11.163 [2024-11-17T18:26:09.430Z] Total : 15124.00 59.08 0.00 0.00 0.00 0.00 0.00 00:17:11.163 00:17:11.421 true 00:17:11.421 19:26:09 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:11.421 19:26:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:11.681 19:26:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:11.681 19:26:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:11.681 19:26:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 1192488 00:17:12.250 [2024-11-17T18:26:10.517Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.250 Nvme0n1 : 3.00 15250.33 59.57 0.00 0.00 0.00 0.00 0.00 00:17:12.250 [2024-11-17T18:26:10.517Z] =================================================================================================================== 00:17:12.250 [2024-11-17T18:26:10.517Z] Total : 15250.33 59.57 0.00 0.00 0.00 0.00 0.00 00:17:12.250 00:17:13.190 [2024-11-17T18:26:11.457Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.190 Nvme0n1 : 4.00 15297.50 59.76 0.00 0.00 0.00 0.00 0.00 00:17:13.190 [2024-11-17T18:26:11.457Z] =================================================================================================================== 00:17:13.190 [2024-11-17T18:26:11.457Z] Total : 15297.50 59.76 0.00 0.00 0.00 0.00 0.00 00:17:13.190 00:17:14.126 [2024-11-17T18:26:12.393Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.126 Nvme0n1 : 5.00 15339.20 59.92 0.00 0.00 0.00 0.00 0.00 00:17:14.126 [2024-11-17T18:26:12.393Z] =================================================================================================================== 00:17:14.126 [2024-11-17T18:26:12.393Z] Total : 15339.20 59.92 0.00 0.00 0.00 0.00 0.00 00:17:14.126 00:17:15.507 [2024-11-17T18:26:13.774Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.507 Nvme0n1 : 6.00 15378.33 60.07 0.00 0.00 0.00 0.00 0.00 00:17:15.507 [2024-11-17T18:26:13.774Z] =================================================================================================================== 00:17:15.507 [2024-11-17T18:26:13.774Z] Total : 15378.33 60.07 0.00 0.00 0.00 0.00 0.00 00:17:15.507 00:17:16.444 [2024-11-17T18:26:14.711Z] 
Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.444 Nvme0n1 : 7.00 15415.57 60.22 0.00 0.00 0.00 0.00 0.00 00:17:16.444 [2024-11-17T18:26:14.711Z] =================================================================================================================== 00:17:16.444 [2024-11-17T18:26:14.711Z] Total : 15415.57 60.22 0.00 0.00 0.00 0.00 0.00 00:17:16.444 00:17:17.383 [2024-11-17T18:26:15.650Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.383 Nvme0n1 : 8.00 15446.75 60.34 0.00 0.00 0.00 0.00 0.00 00:17:17.383 [2024-11-17T18:26:15.650Z] =================================================================================================================== 00:17:17.383 [2024-11-17T18:26:15.650Z] Total : 15446.75 60.34 0.00 0.00 0.00 0.00 0.00 00:17:17.383 00:17:18.366 [2024-11-17T18:26:16.633Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.366 Nvme0n1 : 9.00 15480.89 60.47 0.00 0.00 0.00 0.00 0.00 00:17:18.366 [2024-11-17T18:26:16.633Z] =================================================================================================================== 00:17:18.366 [2024-11-17T18:26:16.633Z] Total : 15480.89 60.47 0.00 0.00 0.00 0.00 0.00 00:17:18.366 00:17:19.337 [2024-11-17T18:26:17.604Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.337 Nvme0n1 : 10.00 15503.30 60.56 0.00 0.00 0.00 0.00 0.00 00:17:19.337 [2024-11-17T18:26:17.604Z] =================================================================================================================== 00:17:19.337 [2024-11-17T18:26:17.604Z] Total : 15503.30 60.56 0.00 0.00 0.00 0.00 0.00 00:17:19.337 00:17:19.337 00:17:19.337 Latency(us) 00:17:19.337 [2024-11-17T18:26:17.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.337 [2024-11-17T18:26:17.604Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.337 Nvme0n1 : 10.01 15503.07 60.56 0.00 0.00 8251.65 4636.07 16214.09 00:17:19.337 [2024-11-17T18:26:17.604Z] =================================================================================================================== 00:17:19.337 [2024-11-17T18:26:17.604Z] Total : 15503.07 60.56 0.00 0.00 8251.65 4636.07 16214.09 00:17:19.337 0 00:17:19.337 19:26:17 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1192349 00:17:19.337 19:26:17 -- common/autotest_common.sh@936 -- # '[' -z 1192349 ']' 00:17:19.337 19:26:17 -- common/autotest_common.sh@940 -- # kill -0 1192349 00:17:19.337 19:26:17 -- common/autotest_common.sh@941 -- # uname 00:17:19.337 19:26:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.337 19:26:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192349 00:17:19.337 19:26:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:19.337 19:26:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:19.337 19:26:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192349' 00:17:19.337 killing process with pid 1192349 00:17:19.337 19:26:17 -- common/autotest_common.sh@955 -- # kill 1192349 00:17:19.337 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.337 00:17:19.337 Latency(us) 00:17:19.337 [2024-11-17T18:26:17.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.337 [2024-11-17T18:26:17.604Z] 
=================================================================================================================== 00:17:19.337 [2024-11-17T18:26:17.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.337 19:26:17 -- common/autotest_common.sh@960 -- # wait 1192349 00:17:19.595 19:26:17 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:19.854 19:26:17 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:19.854 19:26:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:20.113 19:26:18 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:20.113 19:26:18 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:20.113 19:26:18 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:20.372 [2024-11-17 19:26:18.466552] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:20.372 19:26:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:20.372 19:26:18 -- common/autotest_common.sh@650 -- # local es=0 00:17:20.372 19:26:18 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:20.372 19:26:18 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.372 19:26:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.372 19:26:18 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.372 19:26:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.372 19:26:18 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.372 19:26:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.372 19:26:18 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.372 19:26:18 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:20.372 19:26:18 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:20.630 request: 00:17:20.630 { 00:17:20.630 "uuid": "19eb0bb3-f415-4493-a30b-16daff84482d", 00:17:20.630 "method": "bdev_lvol_get_lvstores", 00:17:20.630 "req_id": 1 00:17:20.630 } 00:17:20.630 Got JSON-RPC error response 00:17:20.630 response: 00:17:20.630 { 00:17:20.630 "code": -19, 00:17:20.630 "message": "No such device" 00:17:20.630 } 00:17:20.630 19:26:18 -- common/autotest_common.sh@653 -- # es=1 00:17:20.630 19:26:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.630 19:26:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.630 19:26:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.630 19:26:18 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:20.888 aio_bdev 00:17:20.888 19:26:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b238e58f-e385-45d0-9da4-fc4641951597 00:17:20.888 19:26:19 -- common/autotest_common.sh@897 -- # local bdev_name=b238e58f-e385-45d0-9da4-fc4641951597 00:17:20.888 19:26:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:20.888 19:26:19 -- common/autotest_common.sh@899 -- # local i 00:17:20.888 19:26:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:20.888 19:26:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:20.888 19:26:19 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:21.147 19:26:19 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b238e58f-e385-45d0-9da4-fc4641951597 -t 2000 00:17:21.407 [ 00:17:21.407 { 00:17:21.407 "name": "b238e58f-e385-45d0-9da4-fc4641951597", 00:17:21.407 "aliases": [ 00:17:21.407 "lvs/lvol" 00:17:21.407 ], 00:17:21.407 "product_name": "Logical Volume", 00:17:21.407 "block_size": 4096, 00:17:21.407 "num_blocks": 38912, 00:17:21.407 "uuid": "b238e58f-e385-45d0-9da4-fc4641951597", 00:17:21.407 "assigned_rate_limits": { 00:17:21.407 "rw_ios_per_sec": 0, 00:17:21.407 "rw_mbytes_per_sec": 0, 00:17:21.407 "r_mbytes_per_sec": 0, 00:17:21.407 "w_mbytes_per_sec": 0 00:17:21.407 }, 00:17:21.407 "claimed": false, 00:17:21.407 "zoned": false, 00:17:21.407 "supported_io_types": { 00:17:21.407 "read": true, 00:17:21.407 "write": true, 00:17:21.407 "unmap": true, 00:17:21.407 "write_zeroes": true, 00:17:21.407 "flush": false, 00:17:21.407 "reset": true, 00:17:21.407 "compare": false, 00:17:21.407 "compare_and_write": false, 00:17:21.407 "abort": false, 00:17:21.407 "nvme_admin": false, 00:17:21.407 "nvme_io": false 00:17:21.407 }, 00:17:21.407 "driver_specific": { 00:17:21.407 "lvol": { 00:17:21.407 "lvol_store_uuid": "19eb0bb3-f415-4493-a30b-16daff84482d", 00:17:21.407 "base_bdev": "aio_bdev", 00:17:21.407 "thin_provision": false, 00:17:21.407 "snapshot": false, 00:17:21.407 "clone": false, 00:17:21.407 "esnap_clone": false 00:17:21.407 } 00:17:21.407 } 00:17:21.407 } 00:17:21.407 ] 00:17:21.407 19:26:19 -- common/autotest_common.sh@905 -- # return 0 00:17:21.407 19:26:19 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:21.407 19:26:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:21.665 19:26:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:21.665 19:26:19 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:21.665 19:26:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:21.924 19:26:20 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:21.924 19:26:20 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b238e58f-e385-45d0-9da4-fc4641951597 00:17:22.185 19:26:20 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19eb0bb3-f415-4493-a30b-16daff84482d 00:17:22.445 19:26:20 -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.704 00:17:22.704 real 0m17.261s 00:17:22.704 user 0m16.925s 00:17:22.704 sys 0m1.764s 00:17:22.704 19:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:22.704 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.704 ************************************ 00:17:22.704 END TEST lvs_grow_clean 00:17:22.704 ************************************ 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:22.704 19:26:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:22.704 19:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.704 19:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.704 ************************************ 00:17:22.704 START TEST lvs_grow_dirty 00:17:22.704 ************************************ 00:17:22.704 19:26:20 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.704 19:26:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.963 19:26:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:22.963 19:26:21 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:23.221 19:26:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:23.221 19:26:21 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:23.221 19:26:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:23.480 19:26:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:23.480 19:26:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:23.480 19:26:21 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d lvol 150 00:17:23.739 19:26:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:23.739 19:26:21 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.739 19:26:21 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_rescan aio_bdev 00:17:23.997 [2024-11-17 19:26:22.141994] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:23.997 [2024-11-17 19:26:22.142082] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:23.997 true 00:17:23.998 19:26:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:23.998 19:26:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:24.255 19:26:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:24.255 19:26:22 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:24.515 19:26:22 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:24.775 19:26:22 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:25.035 19:26:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.296 19:26:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1194454 00:17:25.296 19:26:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:25.296 19:26:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.296 19:26:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1194454 /var/tmp/bdevperf.sock 00:17:25.296 19:26:23 -- common/autotest_common.sh@829 -- # '[' -z 1194454 ']' 00:17:25.296 19:26:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.296 19:26:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.296 19:26:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.296 19:26:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.296 19:26:23 -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 [2024-11-17 19:26:23.461396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
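The dirty variant repeats the export and measurement steps used in the clean run above: the lvol is exposed through an NVMe/TCP subsystem and bdevperf attaches to it over the fabric before running the 10-second randwrite job. A sketch of that export/attach sequence, with the NQN, addresses and bdevperf flags taken from the trace and the repo paths abbreviated:

    # Sketch: export the lvol over NVMe/TCP and drive it with bdevperf.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # bdevperf is a separate app with its own RPC socket
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests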
00:17:25.296 [2024-11-17 19:26:23.461478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194454 ] 00:17:25.296 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.296 [2024-11-17 19:26:23.526078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.555 [2024-11-17 19:26:23.615847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.488 19:26:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.488 19:26:24 -- common/autotest_common.sh@862 -- # return 0 00:17:26.488 19:26:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:26.746 Nvme0n1 00:17:26.746 19:26:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:27.005 [ 00:17:27.005 { 00:17:27.005 "name": "Nvme0n1", 00:17:27.005 "aliases": [ 00:17:27.005 "b06961b7-63b6-4214-a2d7-6b17cd82bf4a" 00:17:27.005 ], 00:17:27.005 "product_name": "NVMe disk", 00:17:27.005 "block_size": 4096, 00:17:27.005 "num_blocks": 38912, 00:17:27.005 "uuid": "b06961b7-63b6-4214-a2d7-6b17cd82bf4a", 00:17:27.005 "assigned_rate_limits": { 00:17:27.005 "rw_ios_per_sec": 0, 00:17:27.005 "rw_mbytes_per_sec": 0, 00:17:27.005 "r_mbytes_per_sec": 0, 00:17:27.005 "w_mbytes_per_sec": 0 00:17:27.005 }, 00:17:27.005 "claimed": false, 00:17:27.005 "zoned": false, 00:17:27.005 "supported_io_types": { 00:17:27.005 "read": true, 00:17:27.005 "write": true, 00:17:27.005 "unmap": true, 00:17:27.005 "write_zeroes": true, 00:17:27.005 "flush": true, 00:17:27.005 "reset": true, 00:17:27.005 "compare": true, 00:17:27.005 "compare_and_write": true, 00:17:27.005 "abort": true, 00:17:27.005 "nvme_admin": true, 00:17:27.005 "nvme_io": true 00:17:27.005 }, 00:17:27.005 "driver_specific": { 00:17:27.005 "nvme": [ 00:17:27.005 { 00:17:27.005 "trid": { 00:17:27.005 "trtype": "TCP", 00:17:27.005 "adrfam": "IPv4", 00:17:27.005 "traddr": "10.0.0.2", 00:17:27.005 "trsvcid": "4420", 00:17:27.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:27.005 }, 00:17:27.005 "ctrlr_data": { 00:17:27.005 "cntlid": 1, 00:17:27.005 "vendor_id": "0x8086", 00:17:27.005 "model_number": "SPDK bdev Controller", 00:17:27.005 "serial_number": "SPDK0", 00:17:27.005 "firmware_revision": "24.01.1", 00:17:27.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:27.005 "oacs": { 00:17:27.005 "security": 0, 00:17:27.005 "format": 0, 00:17:27.005 "firmware": 0, 00:17:27.005 "ns_manage": 0 00:17:27.005 }, 00:17:27.005 "multi_ctrlr": true, 00:17:27.005 "ana_reporting": false 00:17:27.005 }, 00:17:27.005 "vs": { 00:17:27.005 "nvme_version": "1.3" 00:17:27.005 }, 00:17:27.005 "ns_data": { 00:17:27.005 "id": 1, 00:17:27.005 "can_share": true 00:17:27.005 } 00:17:27.005 } 00:17:27.005 ], 00:17:27.006 "mp_policy": "active_passive" 00:17:27.006 } 00:17:27.006 } 00:17:27.006 ] 00:17:27.006 19:26:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1194723 00:17:27.006 19:26:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:27.006 19:26:25 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.006 Running I/O 
for 10 seconds... 00:17:27.944 Latency(us) 00:17:27.944 [2024-11-17T18:26:26.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.944 [2024-11-17T18:26:26.211Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.944 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:17:27.944 [2024-11-17T18:26:26.211Z] =================================================================================================================== 00:17:27.944 [2024-11-17T18:26:26.211Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:17:27.944 00:17:28.880 19:26:27 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:29.136 [2024-11-17T18:26:27.403Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.136 Nvme0n1 : 2.00 15155.00 59.20 0.00 0.00 0.00 0.00 0.00 00:17:29.136 [2024-11-17T18:26:27.403Z] =================================================================================================================== 00:17:29.136 [2024-11-17T18:26:27.403Z] Total : 15155.00 59.20 0.00 0.00 0.00 0.00 0.00 00:17:29.136 00:17:29.136 true 00:17:29.136 19:26:27 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:29.136 19:26:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:29.398 19:26:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:29.398 19:26:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:29.398 19:26:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 1194723 00:17:29.965 [2024-11-17T18:26:28.232Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.965 Nvme0n1 : 3.00 15169.00 59.25 0.00 0.00 0.00 0.00 0.00 00:17:29.965 [2024-11-17T18:26:28.232Z] =================================================================================================================== 00:17:29.965 [2024-11-17T18:26:28.232Z] Total : 15169.00 59.25 0.00 0.00 0.00 0.00 0.00 00:17:29.965 00:17:31.340 [2024-11-17T18:26:29.607Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.340 Nvme0n1 : 4.00 15171.25 59.26 0.00 0.00 0.00 0.00 0.00 00:17:31.340 [2024-11-17T18:26:29.607Z] =================================================================================================================== 00:17:31.340 [2024-11-17T18:26:29.607Z] Total : 15171.25 59.26 0.00 0.00 0.00 0.00 0.00 00:17:31.340 00:17:32.273 [2024-11-17T18:26:30.540Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.273 Nvme0n1 : 5.00 15136.80 59.13 0.00 0.00 0.00 0.00 0.00 00:17:32.273 [2024-11-17T18:26:30.540Z] =================================================================================================================== 00:17:32.273 [2024-11-17T18:26:30.540Z] Total : 15136.80 59.13 0.00 0.00 0.00 0.00 0.00 00:17:32.273 00:17:33.211 [2024-11-17T18:26:31.478Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.211 Nvme0n1 : 6.00 15149.83 59.18 0.00 0.00 0.00 0.00 0.00 00:17:33.211 [2024-11-17T18:26:31.478Z] =================================================================================================================== 00:17:33.211 [2024-11-17T18:26:31.478Z] Total : 15149.83 59.18 0.00 0.00 0.00 0.00 0.00 00:17:33.211 00:17:34.147 [2024-11-17T18:26:32.414Z] 
Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.147 Nvme0n1 : 7.00 15182.57 59.31 0.00 0.00 0.00 0.00 0.00 00:17:34.147 [2024-11-17T18:26:32.414Z] =================================================================================================================== 00:17:34.147 [2024-11-17T18:26:32.414Z] Total : 15182.57 59.31 0.00 0.00 0.00 0.00 0.00 00:17:34.147 00:17:35.083 [2024-11-17T18:26:33.350Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.083 Nvme0n1 : 8.00 15206.75 59.40 0.00 0.00 0.00 0.00 0.00 00:17:35.083 [2024-11-17T18:26:33.350Z] =================================================================================================================== 00:17:35.083 [2024-11-17T18:26:33.350Z] Total : 15206.75 59.40 0.00 0.00 0.00 0.00 0.00 00:17:35.083 00:17:36.019 [2024-11-17T18:26:34.286Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.019 Nvme0n1 : 9.00 15218.33 59.45 0.00 0.00 0.00 0.00 0.00 00:17:36.019 [2024-11-17T18:26:34.286Z] =================================================================================================================== 00:17:36.019 [2024-11-17T18:26:34.286Z] Total : 15218.33 59.45 0.00 0.00 0.00 0.00 0.00 00:17:36.019 00:17:36.954 [2024-11-17T18:26:35.221Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.954 Nvme0n1 : 10.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:17:36.954 [2024-11-17T18:26:35.221Z] =================================================================================================================== 00:17:36.954 [2024-11-17T18:26:35.221Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:17:36.954 00:17:36.954 00:17:36.954 Latency(us) 00:17:36.954 [2024-11-17T18:26:35.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.954 [2024-11-17T18:26:35.221Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.954 Nvme0n1 : 10.01 15240.55 59.53 0.00 0.00 8392.73 4538.97 19223.89 00:17:36.954 [2024-11-17T18:26:35.221Z] =================================================================================================================== 00:17:36.954 [2024-11-17T18:26:35.221Z] Total : 15240.55 59.53 0.00 0.00 8392.73 4538.97 19223.89 00:17:36.954 0 00:17:37.215 19:26:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1194454 00:17:37.215 19:26:35 -- common/autotest_common.sh@936 -- # '[' -z 1194454 ']' 00:17:37.215 19:26:35 -- common/autotest_common.sh@940 -- # kill -0 1194454 00:17:37.215 19:26:35 -- common/autotest_common.sh@941 -- # uname 00:17:37.215 19:26:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.215 19:26:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1194454 00:17:37.215 19:26:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:37.215 19:26:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:37.215 19:26:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1194454' 00:17:37.215 killing process with pid 1194454 00:17:37.215 19:26:35 -- common/autotest_common.sh@955 -- # kill 1194454 00:17:37.215 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.215 00:17:37.215 Latency(us) 00:17:37.215 [2024-11-17T18:26:35.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.215 [2024-11-17T18:26:35.482Z] 
=================================================================================================================== 00:17:37.215 [2024-11-17T18:26:35.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.215 19:26:35 -- common/autotest_common.sh@960 -- # wait 1194454 00:17:37.473 19:26:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.731 19:26:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:37.731 19:26:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:37.990 19:26:36 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:37.990 19:26:36 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:37.990 19:26:36 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1191791 00:17:37.990 19:26:36 -- target/nvmf_lvs_grow.sh@74 -- # wait 1191791 00:17:37.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1191791 Killed "${NVMF_APP[@]}" "$@" 00:17:37.990 19:26:36 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:37.990 19:26:36 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:37.990 19:26:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:37.990 19:26:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:37.990 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.990 19:26:36 -- nvmf/common.sh@469 -- # nvmfpid=1195967 00:17:37.990 19:26:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:37.990 19:26:36 -- nvmf/common.sh@470 -- # waitforlisten 1195967 00:17:37.990 19:26:36 -- common/autotest_common.sh@829 -- # '[' -z 1195967 ']' 00:17:37.990 19:26:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.990 19:26:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.990 19:26:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.990 19:26:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.990 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.990 [2024-11-17 19:26:36.144364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:37.990 [2024-11-17 19:26:36.144434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.990 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.990 [2024-11-17 19:26:36.211365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.250 [2024-11-17 19:26:36.298177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:38.250 [2024-11-17 19:26:36.298323] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.250 [2024-11-17 19:26:36.298339] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
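This is the point that makes the test "dirty": the original nvmf_tgt (pid 1191791) is killed with SIGKILL while the lvstore is still open, and a fresh target is started that has to recover the blobstore when the same AIO file is re-attached, as the bs_recover notices below show. A sketch of that kill-and-recover step, reusing the placeholders from the earlier sketches ($nvmfpid, $lvs, aio_file, abbreviated rpc.py path):

    # Sketch: unclean shutdown followed by blobstore recovery on reload.
    kill -9 "$nvmfpid"                                  # lvstore left dirty
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # re-creating the AIO bdev over the same file triggers bs_recover
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # trace expects 61
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # trace expects 99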
00:17:38.250 [2024-11-17 19:26:36.298351] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.250 [2024-11-17 19:26:36.298392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.190 19:26:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.190 19:26:37 -- common/autotest_common.sh@862 -- # return 0 00:17:39.190 19:26:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.190 19:26:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.190 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:17:39.190 19:26:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.190 19:26:37 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.190 [2024-11-17 19:26:37.429264] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:39.190 [2024-11-17 19:26:37.429407] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:39.190 [2024-11-17 19:26:37.429465] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:39.190 19:26:37 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:39.190 19:26:37 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:39.190 19:26:37 -- common/autotest_common.sh@897 -- # local bdev_name=b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:39.190 19:26:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:39.190 19:26:37 -- common/autotest_common.sh@899 -- # local i 00:17:39.190 19:26:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:39.190 19:26:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:39.190 19:26:37 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:39.759 19:26:37 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b06961b7-63b6-4214-a2d7-6b17cd82bf4a -t 2000 00:17:40.019 [ 00:17:40.019 { 00:17:40.019 "name": "b06961b7-63b6-4214-a2d7-6b17cd82bf4a", 00:17:40.019 "aliases": [ 00:17:40.019 "lvs/lvol" 00:17:40.019 ], 00:17:40.019 "product_name": "Logical Volume", 00:17:40.019 "block_size": 4096, 00:17:40.019 "num_blocks": 38912, 00:17:40.019 "uuid": "b06961b7-63b6-4214-a2d7-6b17cd82bf4a", 00:17:40.019 "assigned_rate_limits": { 00:17:40.019 "rw_ios_per_sec": 0, 00:17:40.019 "rw_mbytes_per_sec": 0, 00:17:40.019 "r_mbytes_per_sec": 0, 00:17:40.019 "w_mbytes_per_sec": 0 00:17:40.019 }, 00:17:40.019 "claimed": false, 00:17:40.019 "zoned": false, 00:17:40.019 "supported_io_types": { 00:17:40.019 "read": true, 00:17:40.019 "write": true, 00:17:40.019 "unmap": true, 00:17:40.019 "write_zeroes": true, 00:17:40.019 "flush": false, 00:17:40.019 "reset": true, 00:17:40.019 "compare": false, 00:17:40.019 "compare_and_write": false, 00:17:40.019 "abort": false, 00:17:40.019 "nvme_admin": false, 00:17:40.019 "nvme_io": false 00:17:40.019 }, 00:17:40.019 "driver_specific": { 00:17:40.019 "lvol": { 00:17:40.019 "lvol_store_uuid": "b44b65fc-0472-49c5-8f0c-7d79a04b2e7d", 00:17:40.019 "base_bdev": "aio_bdev", 00:17:40.019 "thin_provision": false, 00:17:40.019 "snapshot": false, 00:17:40.019 "clone": false, 00:17:40.019 "esnap_clone": false 00:17:40.019 } 00:17:40.019 } 00:17:40.019 } 
00:17:40.019 ] 00:17:40.019 19:26:38 -- common/autotest_common.sh@905 -- # return 0 00:17:40.020 19:26:38 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:40.020 19:26:38 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:40.278 19:26:38 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:40.278 19:26:38 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:40.278 19:26:38 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:40.538 19:26:38 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:40.538 19:26:38 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:40.538 [2024-11-17 19:26:38.798576] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:40.798 19:26:38 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:40.798 19:26:38 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.798 19:26:38 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:40.798 19:26:38 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.798 19:26:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.798 19:26:38 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.798 19:26:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.798 19:26:38 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.798 19:26:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.798 19:26:38 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.798 19:26:38 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:40.798 19:26:38 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:41.056 request: 00:17:41.056 { 00:17:41.056 "uuid": "b44b65fc-0472-49c5-8f0c-7d79a04b2e7d", 00:17:41.056 "method": "bdev_lvol_get_lvstores", 00:17:41.056 "req_id": 1 00:17:41.056 } 00:17:41.056 Got JSON-RPC error response 00:17:41.056 response: 00:17:41.056 { 00:17:41.056 "code": -19, 00:17:41.056 "message": "No such device" 00:17:41.056 } 00:17:41.056 19:26:39 -- common/autotest_common.sh@653 -- # es=1 00:17:41.056 19:26:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.056 19:26:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.056 19:26:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.056 19:26:39 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:41.313 aio_bdev 
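The request/response pair just above is the expected failure: once bdev_aio_delete removes the base bdev the lvstore is closed with it, so bdev_lvol_get_lvstores returns -19 "No such device" and the harness's NOT wrapper treats that error as success. A plain-shell sketch of the same negative check, with the NOT helper replaced by an explicit if and the placeholders from the earlier sketches:

    # Sketch of the negative check: the lvstore must vanish with its base bdev.
    rpc.py bdev_aio_delete aio_bdev            # also closes lvstore "lvs"
    if rpc.py bdev_lvol_get_lvstores -u "$lvs" >/dev/null 2>&1; then
        echo "lvstore unexpectedly still reachable" >&2
        exit 1
    fi
    # bring the AIO bdev back so the lvol and lvstore can be cleaned up below
    rpc.py bdev_aio_create aio_file aio_bdev 4096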
00:17:41.313 19:26:39 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:41.313 19:26:39 -- common/autotest_common.sh@897 -- # local bdev_name=b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:41.313 19:26:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.313 19:26:39 -- common/autotest_common.sh@899 -- # local i 00:17:41.313 19:26:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.313 19:26:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.313 19:26:39 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:41.572 19:26:39 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b06961b7-63b6-4214-a2d7-6b17cd82bf4a -t 2000 00:17:41.833 [ 00:17:41.833 { 00:17:41.833 "name": "b06961b7-63b6-4214-a2d7-6b17cd82bf4a", 00:17:41.833 "aliases": [ 00:17:41.833 "lvs/lvol" 00:17:41.833 ], 00:17:41.833 "product_name": "Logical Volume", 00:17:41.833 "block_size": 4096, 00:17:41.833 "num_blocks": 38912, 00:17:41.833 "uuid": "b06961b7-63b6-4214-a2d7-6b17cd82bf4a", 00:17:41.833 "assigned_rate_limits": { 00:17:41.834 "rw_ios_per_sec": 0, 00:17:41.834 "rw_mbytes_per_sec": 0, 00:17:41.834 "r_mbytes_per_sec": 0, 00:17:41.834 "w_mbytes_per_sec": 0 00:17:41.834 }, 00:17:41.834 "claimed": false, 00:17:41.834 "zoned": false, 00:17:41.834 "supported_io_types": { 00:17:41.834 "read": true, 00:17:41.834 "write": true, 00:17:41.834 "unmap": true, 00:17:41.834 "write_zeroes": true, 00:17:41.834 "flush": false, 00:17:41.834 "reset": true, 00:17:41.834 "compare": false, 00:17:41.834 "compare_and_write": false, 00:17:41.834 "abort": false, 00:17:41.834 "nvme_admin": false, 00:17:41.834 "nvme_io": false 00:17:41.834 }, 00:17:41.834 "driver_specific": { 00:17:41.834 "lvol": { 00:17:41.834 "lvol_store_uuid": "b44b65fc-0472-49c5-8f0c-7d79a04b2e7d", 00:17:41.834 "base_bdev": "aio_bdev", 00:17:41.834 "thin_provision": false, 00:17:41.834 "snapshot": false, 00:17:41.834 "clone": false, 00:17:41.834 "esnap_clone": false 00:17:41.834 } 00:17:41.834 } 00:17:41.834 } 00:17:41.834 ] 00:17:41.834 19:26:39 -- common/autotest_common.sh@905 -- # return 0 00:17:41.834 19:26:39 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:41.834 19:26:39 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:42.093 19:26:40 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:42.093 19:26:40 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:42.093 19:26:40 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:42.351 19:26:40 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:42.351 19:26:40 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b06961b7-63b6-4214-a2d7-6b17cd82bf4a 00:17:42.610 19:26:40 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b44b65fc-0472-49c5-8f0c-7d79a04b2e7d 00:17:42.869 19:26:40 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:43.129 19:26:41 -- target/nvmf_lvs_grow.sh@94 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:43.129 00:17:43.129 real 0m20.322s 00:17:43.129 user 0m50.303s 00:17:43.129 sys 0m4.689s 00:17:43.129 19:26:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:43.129 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:17:43.129 ************************************ 00:17:43.129 END TEST lvs_grow_dirty 00:17:43.129 ************************************ 00:17:43.129 19:26:41 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:43.130 19:26:41 -- common/autotest_common.sh@806 -- # type=--id 00:17:43.130 19:26:41 -- common/autotest_common.sh@807 -- # id=0 00:17:43.130 19:26:41 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:43.130 19:26:41 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:43.130 19:26:41 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:43.130 19:26:41 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:43.130 19:26:41 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:43.130 19:26:41 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:43.130 nvmf_trace.0 00:17:43.130 19:26:41 -- common/autotest_common.sh@821 -- # return 0 00:17:43.130 19:26:41 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:43.130 19:26:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.130 19:26:41 -- nvmf/common.sh@116 -- # sync 00:17:43.130 19:26:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:43.130 19:26:41 -- nvmf/common.sh@119 -- # set +e 00:17:43.130 19:26:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.130 19:26:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:43.130 rmmod nvme_tcp 00:17:43.130 rmmod nvme_fabrics 00:17:43.130 rmmod nvme_keyring 00:17:43.130 19:26:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.130 19:26:41 -- nvmf/common.sh@123 -- # set -e 00:17:43.130 19:26:41 -- nvmf/common.sh@124 -- # return 0 00:17:43.130 19:26:41 -- nvmf/common.sh@477 -- # '[' -n 1195967 ']' 00:17:43.130 19:26:41 -- nvmf/common.sh@478 -- # killprocess 1195967 00:17:43.130 19:26:41 -- common/autotest_common.sh@936 -- # '[' -z 1195967 ']' 00:17:43.130 19:26:41 -- common/autotest_common.sh@940 -- # kill -0 1195967 00:17:43.130 19:26:41 -- common/autotest_common.sh@941 -- # uname 00:17:43.130 19:26:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.130 19:26:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1195967 00:17:43.130 19:26:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:43.130 19:26:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:43.130 19:26:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1195967' 00:17:43.130 killing process with pid 1195967 00:17:43.130 19:26:41 -- common/autotest_common.sh@955 -- # kill 1195967 00:17:43.130 19:26:41 -- common/autotest_common.sh@960 -- # wait 1195967 00:17:43.389 19:26:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:43.389 19:26:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:43.389 19:26:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:43.389 19:26:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.389 19:26:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:43.389 19:26:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.389 19:26:41 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.389 19:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.922 19:26:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:45.922 00:17:45.922 real 0m43.669s 00:17:45.922 user 1m14.177s 00:17:45.922 sys 0m8.234s 00:17:45.922 19:26:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:45.922 19:26:43 -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 ************************************ 00:17:45.922 END TEST nvmf_lvs_grow 00:17:45.922 ************************************ 00:17:45.922 19:26:43 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:45.922 19:26:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:45.922 19:26:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.922 19:26:43 -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 ************************************ 00:17:45.922 START TEST nvmf_bdev_io_wait 00:17:45.922 ************************************ 00:17:45.923 19:26:43 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:45.923 * Looking for test storage... 00:17:45.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.923 19:26:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:45.923 19:26:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:45.923 19:26:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:45.923 19:26:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:45.923 19:26:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:45.923 19:26:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:45.923 19:26:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:45.923 19:26:43 -- scripts/common.sh@335 -- # IFS=.-: 00:17:45.923 19:26:43 -- scripts/common.sh@335 -- # read -ra ver1 00:17:45.923 19:26:43 -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.923 19:26:43 -- scripts/common.sh@336 -- # read -ra ver2 00:17:45.923 19:26:43 -- scripts/common.sh@337 -- # local 'op=<' 00:17:45.923 19:26:43 -- scripts/common.sh@339 -- # ver1_l=2 00:17:45.923 19:26:43 -- scripts/common.sh@340 -- # ver2_l=1 00:17:45.923 19:26:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:45.923 19:26:43 -- scripts/common.sh@343 -- # case "$op" in 00:17:45.923 19:26:43 -- scripts/common.sh@344 -- # : 1 00:17:45.923 19:26:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:45.923 19:26:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.923 19:26:43 -- scripts/common.sh@364 -- # decimal 1 00:17:45.923 19:26:43 -- scripts/common.sh@352 -- # local d=1 00:17:45.923 19:26:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.923 19:26:43 -- scripts/common.sh@354 -- # echo 1 00:17:45.923 19:26:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:45.923 19:26:43 -- scripts/common.sh@365 -- # decimal 2 00:17:45.923 19:26:43 -- scripts/common.sh@352 -- # local d=2 00:17:45.923 19:26:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.923 19:26:43 -- scripts/common.sh@354 -- # echo 2 00:17:45.923 19:26:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:45.923 19:26:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:45.923 19:26:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:45.923 19:26:43 -- scripts/common.sh@367 -- # return 0 00:17:45.923 19:26:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.923 19:26:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:45.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.923 --rc genhtml_branch_coverage=1 00:17:45.923 --rc genhtml_function_coverage=1 00:17:45.923 --rc genhtml_legend=1 00:17:45.923 --rc geninfo_all_blocks=1 00:17:45.923 --rc geninfo_unexecuted_blocks=1 00:17:45.923 00:17:45.923 ' 00:17:45.923 19:26:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:45.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.923 --rc genhtml_branch_coverage=1 00:17:45.923 --rc genhtml_function_coverage=1 00:17:45.923 --rc genhtml_legend=1 00:17:45.923 --rc geninfo_all_blocks=1 00:17:45.923 --rc geninfo_unexecuted_blocks=1 00:17:45.923 00:17:45.923 ' 00:17:45.923 19:26:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:45.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.923 --rc genhtml_branch_coverage=1 00:17:45.923 --rc genhtml_function_coverage=1 00:17:45.923 --rc genhtml_legend=1 00:17:45.923 --rc geninfo_all_blocks=1 00:17:45.923 --rc geninfo_unexecuted_blocks=1 00:17:45.923 00:17:45.923 ' 00:17:45.923 19:26:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:45.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.923 --rc genhtml_branch_coverage=1 00:17:45.923 --rc genhtml_function_coverage=1 00:17:45.923 --rc genhtml_legend=1 00:17:45.923 --rc geninfo_all_blocks=1 00:17:45.923 --rc geninfo_unexecuted_blocks=1 00:17:45.923 00:17:45.923 ' 00:17:45.923 19:26:43 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.923 19:26:43 -- nvmf/common.sh@7 -- # uname -s 00:17:45.923 19:26:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.923 19:26:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.923 19:26:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.923 19:26:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.923 19:26:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.923 19:26:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.923 19:26:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.923 19:26:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.923 19:26:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.923 19:26:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.923 19:26:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.923 19:26:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.923 19:26:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.923 19:26:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.923 19:26:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.923 19:26:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.923 19:26:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.923 19:26:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.923 19:26:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.923 19:26:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.923 19:26:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.923 19:26:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.923 19:26:43 -- paths/export.sh@5 -- # export PATH 00:17:45.923 19:26:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.923 19:26:43 -- nvmf/common.sh@46 -- # : 0 00:17:45.923 19:26:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:45.923 19:26:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:45.923 19:26:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:45.923 19:26:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.924 19:26:43 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.924 19:26:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:45.924 19:26:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:45.924 19:26:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:45.924 19:26:43 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.924 19:26:43 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.924 19:26:43 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:45.924 19:26:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:45.924 19:26:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.924 19:26:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:45.924 19:26:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:45.924 19:26:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:45.924 19:26:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.924 19:26:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.924 19:26:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.924 19:26:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:45.924 19:26:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:45.924 19:26:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:45.924 19:26:43 -- common/autotest_common.sh@10 -- # set +x 00:17:47.827 19:26:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:47.827 19:26:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:47.827 19:26:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:47.827 19:26:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:47.827 19:26:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:47.827 19:26:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:47.827 19:26:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:47.827 19:26:45 -- nvmf/common.sh@294 -- # net_devs=() 00:17:47.827 19:26:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:47.827 19:26:45 -- nvmf/common.sh@295 -- # e810=() 00:17:47.827 19:26:45 -- nvmf/common.sh@295 -- # local -ga e810 00:17:47.827 19:26:45 -- nvmf/common.sh@296 -- # x722=() 00:17:47.827 19:26:45 -- nvmf/common.sh@296 -- # local -ga x722 00:17:47.827 19:26:45 -- nvmf/common.sh@297 -- # mlx=() 00:17:47.827 19:26:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:47.827 19:26:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.827 19:26:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:47.827 19:26:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 
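The device-probing trace above reduces to a simple idea: walk the PCI bus, keep the functions whose vendor:device IDs match a known NIC family (the 0x8086 0x1592/0x159b E810 IDs in this run), and resolve each function to its kernel netdev through /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that logic, assuming only sysfs access; this is not the harness's own gather_supported_nvmf_pci_devs helper:

#!/usr/bin/env bash
# Sketch: find Intel E810 functions and report their netdev names via sysfs.
intel=0x8086
e810_ids=(0x1592 0x159b)                      # same E810 device IDs the harness probes for
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do
        [[ $device == "$id" ]] || continue
        [[ -d $dev/net ]] || continue         # function not bound to a kernel net driver
        nets=("$dev"/net/*)                   # same trick as the trace: the netdev name is the directory name
        printf 'Found %s (%s - %s): %s\n' "${dev##*/}" "$vendor" "$device" "$(basename "${nets[0]}")"
    done
done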
00:17:47.827 19:26:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:47.827 19:26:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:47.827 19:26:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:47.827 19:26:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:47.827 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:47.827 19:26:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:47.827 19:26:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:47.827 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:47.827 19:26:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:47.827 19:26:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:47.827 19:26:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.827 19:26:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:47.827 19:26:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.827 19:26:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:47.827 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:47.827 19:26:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.827 19:26:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:47.827 19:26:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.827 19:26:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:47.827 19:26:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.827 19:26:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:47.827 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:47.827 19:26:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.827 19:26:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:47.827 19:26:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:47.827 19:26:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:47.827 19:26:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.827 19:26:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.827 19:26:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.827 19:26:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:47.827 19:26:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.827 19:26:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.827 19:26:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
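With cvl_0_0 picked as the target-side port and cvl_0_1 as the initiator side, nvmf_tcp_init (traced just below) splits the two physical functions across a network namespace so target and initiator get independent IP stacks on one host. Condensed to the essential commands, using the same interface names and addresses as this run:

# Condensed from the nvmf_tcp_init steps traced below (names/addresses as in this run).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                               # namespace that will own the target port
ip link set cvl_0_0 netns "$NS"                  # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                               # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator reachability check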
00:17:47.827 19:26:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.827 19:26:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.827 19:26:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:47.827 19:26:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:47.827 19:26:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.827 19:26:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.827 19:26:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.827 19:26:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.827 19:26:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:47.827 19:26:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.827 19:26:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.827 19:26:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.827 19:26:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:47.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:47.827 00:17:47.827 --- 10.0.0.2 ping statistics --- 00:17:47.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.827 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:47.827 19:26:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:17:47.827 00:17:47.827 --- 10.0.0.1 ping statistics --- 00:17:47.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.827 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:47.827 19:26:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.827 19:26:45 -- nvmf/common.sh@410 -- # return 0 00:17:47.827 19:26:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:47.827 19:26:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.827 19:26:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:47.827 19:26:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.827 19:26:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:47.827 19:26:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:47.827 19:26:45 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:47.827 19:26:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:47.827 19:26:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.827 19:26:45 -- common/autotest_common.sh@10 -- # set +x 00:17:47.827 19:26:45 -- nvmf/common.sh@469 -- # nvmfpid=1198650 00:17:47.827 19:26:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:47.827 19:26:45 -- nvmf/common.sh@470 -- # waitforlisten 1198650 00:17:47.827 19:26:45 -- common/autotest_common.sh@829 -- # '[' -z 1198650 ']' 00:17:47.827 19:26:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.827 19:26:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.827 19:26:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.827 19:26:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.827 19:26:45 -- common/autotest_common.sh@10 -- # set +x 00:17:47.827 [2024-11-17 19:26:45.984019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:47.827 [2024-11-17 19:26:45.984123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.828 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.828 [2024-11-17 19:26:46.052692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.086 [2024-11-17 19:26:46.144115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.086 [2024-11-17 19:26:46.144292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.086 [2024-11-17 19:26:46.144312] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.086 [2024-11-17 19:26:46.144326] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.086 [2024-11-17 19:26:46.144389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.086 [2024-11-17 19:26:46.144442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.086 [2024-11-17 19:26:46.144487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.086 [2024-11-17 19:26:46.144488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.086 19:26:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.086 19:26:46 -- common/autotest_common.sh@862 -- # return 0 00:17:48.086 19:26:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.086 19:26:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.086 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.086 19:26:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.086 19:26:46 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:48.086 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.086 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.086 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.086 19:26:46 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:48.086 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.086 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.086 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.086 19:26:46 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.086 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.086 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.086 [2024-11-17 19:26:46.312801] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.086 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.086 19:26:46 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:48.086 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.086 19:26:46 -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.345 Malloc0 00:17:48.345 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.345 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.345 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.345 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.345 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.345 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.345 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.345 19:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.345 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:17:48.345 [2024-11-17 19:26:46.378317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.345 19:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1198741 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@30 -- # READ_PID=1198744 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # config=() 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.345 19:26:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.345 { 00:17:48.345 "params": { 00:17:48.345 "name": "Nvme$subsystem", 00:17:48.345 "trtype": "$TEST_TRANSPORT", 00:17:48.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.345 "adrfam": "ipv4", 00:17:48.345 "trsvcid": "$NVMF_PORT", 00:17:48.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.345 "hdgst": ${hdgst:-false}, 00:17:48.345 "ddgst": ${ddgst:-false} 00:17:48.345 }, 00:17:48.345 "method": "bdev_nvme_attach_controller" 00:17:48.345 } 00:17:48.345 EOF 00:17:48.345 )") 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1198747 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # config=() 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.345 19:26:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.345 { 00:17:48.345 "params": { 00:17:48.345 "name": "Nvme$subsystem", 00:17:48.345 "trtype": "$TEST_TRANSPORT", 00:17:48.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.345 "adrfam": "ipv4", 00:17:48.345 "trsvcid": "$NVMF_PORT", 00:17:48.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.345 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:17:48.345 "hdgst": ${hdgst:-false}, 00:17:48.345 "ddgst": ${ddgst:-false} 00:17:48.345 }, 00:17:48.345 "method": "bdev_nvme_attach_controller" 00:17:48.345 } 00:17:48.345 EOF 00:17:48.345 )") 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # cat 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1198751 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # config=() 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@35 -- # sync 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.345 19:26:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.345 { 00:17:48.345 "params": { 00:17:48.345 "name": "Nvme$subsystem", 00:17:48.345 "trtype": "$TEST_TRANSPORT", 00:17:48.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.345 "adrfam": "ipv4", 00:17:48.345 "trsvcid": "$NVMF_PORT", 00:17:48.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.345 "hdgst": ${hdgst:-false}, 00:17:48.345 "ddgst": ${ddgst:-false} 00:17:48.345 }, 00:17:48.345 "method": "bdev_nvme_attach_controller" 00:17:48.345 } 00:17:48.345 EOF 00:17:48.345 )") 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # cat 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # config=() 00:17:48.345 19:26:46 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.345 19:26:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.345 { 00:17:48.345 "params": { 00:17:48.345 "name": "Nvme$subsystem", 00:17:48.345 "trtype": "$TEST_TRANSPORT", 00:17:48.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.345 "adrfam": "ipv4", 00:17:48.345 "trsvcid": "$NVMF_PORT", 00:17:48.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.345 "hdgst": ${hdgst:-false}, 00:17:48.345 "ddgst": ${ddgst:-false} 00:17:48.345 }, 00:17:48.345 "method": "bdev_nvme_attach_controller" 00:17:48.345 } 00:17:48.345 EOF 00:17:48.345 )") 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # cat 00:17:48.345 19:26:46 -- target/bdev_io_wait.sh@37 -- # wait 1198741 00:17:48.345 19:26:46 -- nvmf/common.sh@542 -- # cat 00:17:48.345 19:26:46 -- nvmf/common.sh@544 -- # jq . 00:17:48.345 19:26:46 -- nvmf/common.sh@544 -- # jq . 00:17:48.345 19:26:46 -- nvmf/common.sh@544 -- # jq . 
00:17:48.345 19:26:46 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.345 19:26:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.345 "params": { 00:17:48.345 "name": "Nvme1", 00:17:48.345 "trtype": "tcp", 00:17:48.345 "traddr": "10.0.0.2", 00:17:48.345 "adrfam": "ipv4", 00:17:48.345 "trsvcid": "4420", 00:17:48.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.345 "hdgst": false, 00:17:48.345 "ddgst": false 00:17:48.345 }, 00:17:48.345 "method": "bdev_nvme_attach_controller" 00:17:48.345 }' 00:17:48.345 19:26:46 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.346 19:26:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.346 "params": { 00:17:48.346 "name": "Nvme1", 00:17:48.346 "trtype": "tcp", 00:17:48.346 "traddr": "10.0.0.2", 00:17:48.346 "adrfam": "ipv4", 00:17:48.346 "trsvcid": "4420", 00:17:48.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.346 "hdgst": false, 00:17:48.346 "ddgst": false 00:17:48.346 }, 00:17:48.346 "method": "bdev_nvme_attach_controller" 00:17:48.346 }' 00:17:48.346 19:26:46 -- nvmf/common.sh@544 -- # jq . 00:17:48.346 19:26:46 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.346 19:26:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.346 "params": { 00:17:48.346 "name": "Nvme1", 00:17:48.346 "trtype": "tcp", 00:17:48.346 "traddr": "10.0.0.2", 00:17:48.346 "adrfam": "ipv4", 00:17:48.346 "trsvcid": "4420", 00:17:48.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.346 "hdgst": false, 00:17:48.346 "ddgst": false 00:17:48.346 }, 00:17:48.346 "method": "bdev_nvme_attach_controller" 00:17:48.346 }' 00:17:48.346 19:26:46 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.346 19:26:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.346 "params": { 00:17:48.346 "name": "Nvme1", 00:17:48.346 "trtype": "tcp", 00:17:48.346 "traddr": "10.0.0.2", 00:17:48.346 "adrfam": "ipv4", 00:17:48.346 "trsvcid": "4420", 00:17:48.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.346 "hdgst": false, 00:17:48.346 "ddgst": false 00:17:48.346 }, 00:17:48.346 "method": "bdev_nvme_attach_controller" 00:17:48.346 }' 00:17:48.346 [2024-11-17 19:26:46.422962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:48.346 [2024-11-17 19:26:46.422997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:48.346 [2024-11-17 19:26:46.422997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:48.346 [2024-11-17 19:26:46.423034] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:48.346 [2024-11-17 19:26:46.423076] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-17 19:26:46.423076] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:48.346 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:48.346 [2024-11-17 19:26:46.423347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
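The interleaved EAL banners above come from four separate bdevperf processes running concurrently: each gets a disjoint core mask (0x10/0x20/0x40/0x80) and its own shared-memory id, which DPDK turns into the distinct --file-prefix=spdk1..spdk4 hugepage prefixes visible in the banners, so the instances never contend for the same cores or EAL state. A stripped-down version of that orchestration; the jobs table and pids array are illustrative, not the script's own variables:

# Illustrative orchestration of the four concurrent workloads; gen_config as sketched earlier.
BDEVPERF=build/examples/bdevperf
declare -A jobs=( [write]="0x10 1" [read]="0x20 2" [flush]="0x40 3" [unmap]="0x80 4" )
pids=()
for wl in write read flush unmap; do
    read -r mask shm_id <<< "${jobs[$wl]}"
    # Own core mask + own shm id per instance => its own DPDK --file-prefix=spdk<id>.
    "$BDEVPERF" -m "$mask" -i "$shm_id" --json <(gen_config) \
        -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"    # the script waits on the write/read/flush/unmap PIDs in turn; same effect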
00:17:48.346 [2024-11-17 19:26:46.423404] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:48.346 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.346 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.346 [2024-11-17 19:26:46.596364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.604 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.604 [2024-11-17 19:26:46.670464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:48.604 [2024-11-17 19:26:46.699048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.604 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.604 [2024-11-17 19:26:46.772875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:48.604 [2024-11-17 19:26:46.799738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.604 [2024-11-17 19:26:46.866393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.604 [2024-11-17 19:26:46.868842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:48.863 [2024-11-17 19:26:46.932146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:48.863 Running I/O for 1 seconds... 00:17:48.863 Running I/O for 1 seconds... 00:17:48.863 Running I/O for 1 seconds... 00:17:49.126 Running I/O for 1 seconds... 00:17:50.101 00:17:50.101 Latency(us) 00:17:50.101 [2024-11-17T18:26:48.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.101 [2024-11-17T18:26:48.368Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:50.101 Nvme1n1 : 1.02 6740.63 26.33 0.00 0.00 18785.54 8592.50 27962.03 00:17:50.101 [2024-11-17T18:26:48.368Z] =================================================================================================================== 00:17:50.101 [2024-11-17T18:26:48.368Z] Total : 6740.63 26.33 0.00 0.00 18785.54 8592.50 27962.03 00:17:50.101 00:17:50.101 Latency(us) 00:17:50.101 [2024-11-17T18:26:48.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.101 [2024-11-17T18:26:48.368Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:50.101 Nvme1n1 : 1.01 9292.25 36.30 0.00 0.00 13708.25 8835.22 25826.04 00:17:50.101 [2024-11-17T18:26:48.368Z] =================================================================================================================== 00:17:50.101 [2024-11-17T18:26:48.368Z] Total : 9292.25 36.30 0.00 0.00 13708.25 8835.22 25826.04 00:17:50.101 00:17:50.102 Latency(us) 00:17:50.102 [2024-11-17T18:26:48.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.102 [2024-11-17T18:26:48.369Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:50.102 Nvme1n1 : 1.00 6871.16 26.84 0.00 0.00 18581.90 4296.25 44273.21 00:17:50.102 [2024-11-17T18:26:48.369Z] =================================================================================================================== 00:17:50.102 [2024-11-17T18:26:48.369Z] Total : 6871.16 26.84 0.00 0.00 18581.90 4296.25 44273.21 00:17:50.102 00:17:50.102 Latency(us) 00:17:50.102 [2024-11-17T18:26:48.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.102 [2024-11-17T18:26:48.369Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 
128, IO size: 4096) 00:17:50.102 Nvme1n1 : 1.00 189447.73 740.03 0.00 0.00 673.10 253.35 843.47 00:17:50.102 [2024-11-17T18:26:48.369Z] =================================================================================================================== 00:17:50.102 [2024-11-17T18:26:48.369Z] Total : 189447.73 740.03 0.00 0.00 673.10 253.35 843.47 00:17:50.102 19:26:48 -- target/bdev_io_wait.sh@38 -- # wait 1198744 00:17:50.102 19:26:48 -- target/bdev_io_wait.sh@39 -- # wait 1198747 00:17:50.360 19:26:48 -- target/bdev_io_wait.sh@40 -- # wait 1198751 00:17:50.360 19:26:48 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.360 19:26:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.360 19:26:48 -- common/autotest_common.sh@10 -- # set +x 00:17:50.360 19:26:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.360 19:26:48 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:50.360 19:26:48 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:50.360 19:26:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.360 19:26:48 -- nvmf/common.sh@116 -- # sync 00:17:50.360 19:26:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.360 19:26:48 -- nvmf/common.sh@119 -- # set +e 00:17:50.360 19:26:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.360 19:26:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.360 rmmod nvme_tcp 00:17:50.360 rmmod nvme_fabrics 00:17:50.360 rmmod nvme_keyring 00:17:50.360 19:26:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.360 19:26:48 -- nvmf/common.sh@123 -- # set -e 00:17:50.360 19:26:48 -- nvmf/common.sh@124 -- # return 0 00:17:50.360 19:26:48 -- nvmf/common.sh@477 -- # '[' -n 1198650 ']' 00:17:50.360 19:26:48 -- nvmf/common.sh@478 -- # killprocess 1198650 00:17:50.360 19:26:48 -- common/autotest_common.sh@936 -- # '[' -z 1198650 ']' 00:17:50.360 19:26:48 -- common/autotest_common.sh@940 -- # kill -0 1198650 00:17:50.360 19:26:48 -- common/autotest_common.sh@941 -- # uname 00:17:50.360 19:26:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.360 19:26:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1198650 00:17:50.360 19:26:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.360 19:26:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.360 19:26:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1198650' 00:17:50.360 killing process with pid 1198650 00:17:50.360 19:26:48 -- common/autotest_common.sh@955 -- # kill 1198650 00:17:50.360 19:26:48 -- common/autotest_common.sh@960 -- # wait 1198650 00:17:50.618 19:26:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.618 19:26:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:50.618 19:26:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:50.618 19:26:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.618 19:26:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:50.619 19:26:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.619 19:26:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.619 19:26:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.151 19:26:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:53.151 00:17:53.151 real 0m7.179s 00:17:53.151 user 0m16.426s 00:17:53.151 sys 0m3.385s 00:17:53.151 19:26:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 
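Once the four jobs report their latency numbers, nvmftestfini (traced above) unwinds the setup: the NVMe/TCP initiator modules are unloaded with retries, the nvmf_tgt process that nvmfappstart pinned inside the namespace is killed and reaped, and the namespace plus the initiator-side address are dropped. A minimal sketch along the lines of the traced commands; the pid and interface names are the ones from this run, and the explicit netns delete stands in for remove_spdk_ns (an assumption about what that helper does):

# Minimal teardown sketch (pid/interface names from this run; netns delete assumed for remove_spdk_ns).
sync
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break          # retried because the module can still be in use
    sleep 1
done
modprobe -v -r nvme-fabrics
pid=1198650                                   # nvmf_tgt started for the bdev_io_wait test
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done   # wait for the reactor to exit
fi
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1                      # drop the initiator address, as in the trace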
00:17:53.151 19:26:50 -- common/autotest_common.sh@10 -- # set +x 00:17:53.151 ************************************ 00:17:53.151 END TEST nvmf_bdev_io_wait 00:17:53.151 ************************************ 00:17:53.151 19:26:50 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:53.151 19:26:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.151 19:26:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.151 19:26:50 -- common/autotest_common.sh@10 -- # set +x 00:17:53.151 ************************************ 00:17:53.151 START TEST nvmf_queue_depth 00:17:53.151 ************************************ 00:17:53.151 19:26:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:53.151 * Looking for test storage... 00:17:53.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.151 19:26:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:53.151 19:26:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:53.151 19:26:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:53.151 19:26:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:53.151 19:26:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:53.151 19:26:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.151 19:26:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.151 19:26:51 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.151 19:26:51 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.151 19:26:51 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.151 19:26:51 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.151 19:26:51 -- scripts/common.sh@337 -- # local 'op=<' 00:17:53.151 19:26:51 -- scripts/common.sh@339 -- # ver1_l=2 00:17:53.151 19:26:51 -- scripts/common.sh@340 -- # ver2_l=1 00:17:53.151 19:26:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.152 19:26:51 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.152 19:26:51 -- scripts/common.sh@344 -- # : 1 00:17:53.152 19:26:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.152 19:26:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.152 19:26:51 -- scripts/common.sh@364 -- # decimal 1 00:17:53.152 19:26:51 -- scripts/common.sh@352 -- # local d=1 00:17:53.152 19:26:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.152 19:26:51 -- scripts/common.sh@354 -- # echo 1 00:17:53.152 19:26:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.152 19:26:51 -- scripts/common.sh@365 -- # decimal 2 00:17:53.152 19:26:51 -- scripts/common.sh@352 -- # local d=2 00:17:53.152 19:26:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.152 19:26:51 -- scripts/common.sh@354 -- # echo 2 00:17:53.152 19:26:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:53.152 19:26:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.152 19:26:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.152 19:26:51 -- scripts/common.sh@367 -- # return 0 00:17:53.152 19:26:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.152 19:26:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 19:26:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 19:26:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 19:26:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:53.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.152 --rc genhtml_branch_coverage=1 00:17:53.152 --rc genhtml_function_coverage=1 00:17:53.152 --rc genhtml_legend=1 00:17:53.152 --rc geninfo_all_blocks=1 00:17:53.152 --rc geninfo_unexecuted_blocks=1 00:17:53.152 00:17:53.152 ' 00:17:53.152 19:26:51 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.152 19:26:51 -- nvmf/common.sh@7 -- # uname -s 00:17:53.152 19:26:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.152 19:26:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.152 19:26:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.152 19:26:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.152 19:26:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.152 19:26:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.152 19:26:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.152 19:26:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.152 19:26:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.152 19:26:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.152 19:26:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.152 19:26:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.152 19:26:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.152 19:26:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.152 19:26:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.152 19:26:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.152 19:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.152 19:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.152 19:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.152 19:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.152 19:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.152 19:26:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.152 19:26:51 -- paths/export.sh@5 -- # export PATH 00:17:53.152 19:26:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.152 19:26:51 -- nvmf/common.sh@46 -- # : 0 00:17:53.152 19:26:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.152 19:26:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.152 19:26:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.152 19:26:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.152 19:26:51 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.152 19:26:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:53.152 19:26:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.152 19:26:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.152 19:26:51 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:53.152 19:26:51 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:53.152 19:26:51 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.152 19:26:51 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:53.152 19:26:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:53.152 19:26:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.152 19:26:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.152 19:26:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.152 19:26:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.152 19:26:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.152 19:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.152 19:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.152 19:26:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:53.152 19:26:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:53.152 19:26:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:53.152 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:17:55.052 19:26:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:55.052 19:26:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:55.052 19:26:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:55.052 19:26:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:55.052 19:26:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:55.052 19:26:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:55.052 19:26:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:55.052 19:26:53 -- nvmf/common.sh@294 -- # net_devs=() 00:17:55.052 19:26:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:55.052 19:26:53 -- nvmf/common.sh@295 -- # e810=() 00:17:55.052 19:26:53 -- nvmf/common.sh@295 -- # local -ga e810 00:17:55.052 19:26:53 -- nvmf/common.sh@296 -- # x722=() 00:17:55.052 19:26:53 -- nvmf/common.sh@296 -- # local -ga x722 00:17:55.052 19:26:53 -- nvmf/common.sh@297 -- # mlx=() 00:17:55.052 19:26:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:55.052 19:26:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.052 19:26:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.053 19:26:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:55.053 19:26:53 -- nvmf/common.sh@320 -- 
# [[ tcp == rdma ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:55.053 19:26:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:55.053 19:26:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.053 19:26:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.053 19:26:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.053 19:26:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.053 19:26:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:55.053 19:26:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.053 19:26:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.053 19:26:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.053 19:26:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.053 19:26:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.053 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.053 19:26:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.053 19:26:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.053 19:26:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.053 19:26:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.053 19:26:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.053 19:26:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.053 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:55.053 19:26:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.053 19:26:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:55.053 19:26:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:55.053 19:26:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:55.053 19:26:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.053 19:26:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.053 19:26:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.053 19:26:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:55.053 19:26:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.053 19:26:53 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.053 19:26:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:55.053 19:26:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.053 19:26:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.053 19:26:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:55.053 19:26:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:55.053 19:26:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.053 19:26:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.053 19:26:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.053 19:26:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.053 19:26:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:55.053 19:26:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.053 19:26:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.053 19:26:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.053 19:26:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:55.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:17:55.053 00:17:55.053 --- 10.0.0.2 ping statistics --- 00:17:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.053 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:55.053 19:26:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:17:55.053 00:17:55.053 --- 10.0.0.1 ping statistics --- 00:17:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.053 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:55.053 19:26:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.053 19:26:53 -- nvmf/common.sh@410 -- # return 0 00:17:55.053 19:26:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:55.053 19:26:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.053 19:26:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:55.053 19:26:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.053 19:26:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:55.053 19:26:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:55.053 19:26:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:55.053 19:26:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:55.053 19:26:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.053 19:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:55.053 19:26:53 -- nvmf/common.sh@469 -- # nvmfpid=1201054 00:17:55.053 19:26:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.053 19:26:53 -- nvmf/common.sh@470 -- # waitforlisten 1201054 00:17:55.053 19:26:53 -- common/autotest_common.sh@829 -- # '[' -z 1201054 ']' 00:17:55.053 19:26:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.053 19:26:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.053 19:26:53 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.053 19:26:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.053 19:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:55.053 [2024-11-17 19:26:53.293905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:55.053 [2024-11-17 19:26:53.293985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.312 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.312 [2024-11-17 19:26:53.361505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.312 [2024-11-17 19:26:53.458037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:55.312 [2024-11-17 19:26:53.458186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.312 [2024-11-17 19:26:53.458202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.312 [2024-11-17 19:26:53.458214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.312 [2024-11-17 19:26:53.458257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.245 19:26:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.245 19:26:54 -- common/autotest_common.sh@862 -- # return 0 00:17:56.245 19:26:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:56.245 19:26:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.245 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 19:26:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.245 19:26:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.245 19:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.245 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 [2024-11-17 19:26:54.334244] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.245 19:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.245 19:26:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.245 19:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.245 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 Malloc0 00:17:56.245 19:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.245 19:26:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.245 19:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.245 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 19:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.245 19:26:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.245 19:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.245 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 19:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.245 19:26:54 -- target/queue_depth.sh@27 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.245 19:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.245 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.245 [2024-11-17 19:26:54.401290] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.245 19:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.246 19:26:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=1201218 00:17:56.246 19:26:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.246 19:26:54 -- target/queue_depth.sh@33 -- # waitforlisten 1201218 /var/tmp/bdevperf.sock 00:17:56.246 19:26:54 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:56.246 19:26:54 -- common/autotest_common.sh@829 -- # '[' -z 1201218 ']' 00:17:56.246 19:26:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.246 19:26:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.246 19:26:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.246 19:26:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.246 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:56.246 [2024-11-17 19:26:54.446436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:56.246 [2024-11-17 19:26:54.446523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201218 ] 00:17:56.246 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.246 [2024-11-17 19:26:54.510233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.504 [2024-11-17 19:26:54.600664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.437 19:26:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.437 19:26:55 -- common/autotest_common.sh@862 -- # return 0 00:17:57.437 19:26:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.437 19:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.437 19:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:57.437 NVMe0n1 00:17:57.437 19:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.437 19:26:55 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:57.437 Running I/O for 10 seconds... 
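The setup traced above reduces to a short RPC sequence against the in-namespace target plus a bdevperf initiator. A minimal by-hand sketch follows; the SPDK and RPC shell variables are shorthand introduced here, the paths are this job's workspace, and rpc.py is assumed to reach the target's default /var/tmp/spdk.sock (a UNIX socket, so it stays visible even though nvmf_tgt itself runs inside the cvl_0_0_ns_spdk network namespace).

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Target side: TCP transport, a 64 MB malloc bdev with 512 B blocks, and a
    # subsystem exposing it on 10.0.0.2:4420.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf waits for RPC (-z), attaches the remote controller
    # over TCP, then perform_tests drives queue depth 1024, 4 KiB verify I/O for
    # 10 seconds. (The test script waits for the bdevperf RPC socket before
    # attaching; a by-hand run would need the same pause.)
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests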
00:18:09.640 00:18:09.640 Latency(us) 00:18:09.640 [2024-11-17T18:27:07.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.640 [2024-11-17T18:27:07.907Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:09.640 Verification LBA range: start 0x0 length 0x4000 00:18:09.640 NVMe0n1 : 10.11 12285.67 47.99 0.00 0.00 82692.91 14854.83 62137.84 00:18:09.640 [2024-11-17T18:27:07.907Z] =================================================================================================================== 00:18:09.640 [2024-11-17T18:27:07.907Z] Total : 12285.67 47.99 0.00 0.00 82692.91 14854.83 62137.84 00:18:09.640 0 00:18:09.640 19:27:05 -- target/queue_depth.sh@39 -- # killprocess 1201218 00:18:09.640 19:27:05 -- common/autotest_common.sh@936 -- # '[' -z 1201218 ']' 00:18:09.640 19:27:05 -- common/autotest_common.sh@940 -- # kill -0 1201218 00:18:09.640 19:27:05 -- common/autotest_common.sh@941 -- # uname 00:18:09.640 19:27:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.640 19:27:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1201218 00:18:09.640 19:27:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:09.640 19:27:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:09.640 19:27:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1201218' 00:18:09.640 killing process with pid 1201218 00:18:09.640 19:27:05 -- common/autotest_common.sh@955 -- # kill 1201218 00:18:09.640 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.640 00:18:09.640 Latency(us) 00:18:09.640 [2024-11-17T18:27:07.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.640 [2024-11-17T18:27:07.907Z] =================================================================================================================== 00:18:09.640 [2024-11-17T18:27:07.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.640 19:27:05 -- common/autotest_common.sh@960 -- # wait 1201218 00:18:09.640 19:27:06 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:09.640 19:27:06 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:09.640 19:27:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:09.640 19:27:06 -- nvmf/common.sh@116 -- # sync 00:18:09.640 19:27:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:09.640 19:27:06 -- nvmf/common.sh@119 -- # set +e 00:18:09.640 19:27:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:09.640 19:27:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:09.640 rmmod nvme_tcp 00:18:09.640 rmmod nvme_fabrics 00:18:09.640 rmmod nvme_keyring 00:18:09.640 19:27:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:09.640 19:27:06 -- nvmf/common.sh@123 -- # set -e 00:18:09.640 19:27:06 -- nvmf/common.sh@124 -- # return 0 00:18:09.640 19:27:06 -- nvmf/common.sh@477 -- # '[' -n 1201054 ']' 00:18:09.640 19:27:06 -- nvmf/common.sh@478 -- # killprocess 1201054 00:18:09.640 19:27:06 -- common/autotest_common.sh@936 -- # '[' -z 1201054 ']' 00:18:09.640 19:27:06 -- common/autotest_common.sh@940 -- # kill -0 1201054 00:18:09.640 19:27:06 -- common/autotest_common.sh@941 -- # uname 00:18:09.640 19:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.640 19:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1201054 00:18:09.640 19:27:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:09.640 19:27:06 -- common/autotest_common.sh@946 
-- # '[' reactor_1 = sudo ']' 00:18:09.640 19:27:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1201054' 00:18:09.640 killing process with pid 1201054 00:18:09.640 19:27:06 -- common/autotest_common.sh@955 -- # kill 1201054 00:18:09.640 19:27:06 -- common/autotest_common.sh@960 -- # wait 1201054 00:18:09.640 19:27:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:09.640 19:27:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:09.640 19:27:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:09.640 19:27:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.640 19:27:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:09.640 19:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.640 19:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.640 19:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.575 19:27:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:10.575 00:18:10.575 real 0m17.618s 00:18:10.575 user 0m25.197s 00:18:10.575 sys 0m3.188s 00:18:10.575 19:27:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:10.575 19:27:08 -- common/autotest_common.sh@10 -- # set +x 00:18:10.575 ************************************ 00:18:10.575 END TEST nvmf_queue_depth 00:18:10.575 ************************************ 00:18:10.575 19:27:08 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:10.575 19:27:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:10.575 19:27:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:10.575 19:27:08 -- common/autotest_common.sh@10 -- # set +x 00:18:10.575 ************************************ 00:18:10.575 START TEST nvmf_multipath 00:18:10.575 ************************************ 00:18:10.575 19:27:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:10.575 * Looking for test storage... 00:18:10.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.575 19:27:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:10.575 19:27:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:10.575 19:27:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:10.575 19:27:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:10.575 19:27:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:10.575 19:27:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:10.575 19:27:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:10.575 19:27:08 -- scripts/common.sh@335 -- # IFS=.-: 00:18:10.575 19:27:08 -- scripts/common.sh@335 -- # read -ra ver1 00:18:10.575 19:27:08 -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.575 19:27:08 -- scripts/common.sh@336 -- # read -ra ver2 00:18:10.575 19:27:08 -- scripts/common.sh@337 -- # local 'op=<' 00:18:10.575 19:27:08 -- scripts/common.sh@339 -- # ver1_l=2 00:18:10.575 19:27:08 -- scripts/common.sh@340 -- # ver2_l=1 00:18:10.575 19:27:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:10.575 19:27:08 -- scripts/common.sh@343 -- # case "$op" in 00:18:10.575 19:27:08 -- scripts/common.sh@344 -- # : 1 00:18:10.575 19:27:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:10.575 19:27:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.575 19:27:08 -- scripts/common.sh@364 -- # decimal 1 00:18:10.575 19:27:08 -- scripts/common.sh@352 -- # local d=1 00:18:10.575 19:27:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.575 19:27:08 -- scripts/common.sh@354 -- # echo 1 00:18:10.575 19:27:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:10.575 19:27:08 -- scripts/common.sh@365 -- # decimal 2 00:18:10.575 19:27:08 -- scripts/common.sh@352 -- # local d=2 00:18:10.575 19:27:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.575 19:27:08 -- scripts/common.sh@354 -- # echo 2 00:18:10.575 19:27:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:10.575 19:27:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:10.575 19:27:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:10.575 19:27:08 -- scripts/common.sh@367 -- # return 0 00:18:10.575 19:27:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.575 19:27:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:10.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.575 --rc genhtml_branch_coverage=1 00:18:10.575 --rc genhtml_function_coverage=1 00:18:10.575 --rc genhtml_legend=1 00:18:10.575 --rc geninfo_all_blocks=1 00:18:10.575 --rc geninfo_unexecuted_blocks=1 00:18:10.575 00:18:10.575 ' 00:18:10.575 19:27:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:10.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.575 --rc genhtml_branch_coverage=1 00:18:10.575 --rc genhtml_function_coverage=1 00:18:10.575 --rc genhtml_legend=1 00:18:10.575 --rc geninfo_all_blocks=1 00:18:10.575 --rc geninfo_unexecuted_blocks=1 00:18:10.575 00:18:10.575 ' 00:18:10.575 19:27:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:10.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.575 --rc genhtml_branch_coverage=1 00:18:10.575 --rc genhtml_function_coverage=1 00:18:10.575 --rc genhtml_legend=1 00:18:10.575 --rc geninfo_all_blocks=1 00:18:10.575 --rc geninfo_unexecuted_blocks=1 00:18:10.575 00:18:10.575 ' 00:18:10.575 19:27:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:10.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.575 --rc genhtml_branch_coverage=1 00:18:10.575 --rc genhtml_function_coverage=1 00:18:10.575 --rc genhtml_legend=1 00:18:10.575 --rc geninfo_all_blocks=1 00:18:10.575 --rc geninfo_unexecuted_blocks=1 00:18:10.575 00:18:10.575 ' 00:18:10.575 19:27:08 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.575 19:27:08 -- nvmf/common.sh@7 -- # uname -s 00:18:10.575 19:27:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.575 19:27:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.575 19:27:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.575 19:27:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.575 19:27:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.575 19:27:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.575 19:27:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.575 19:27:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.575 19:27:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.575 19:27:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.576 19:27:08 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.576 19:27:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.576 19:27:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.576 19:27:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.576 19:27:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.576 19:27:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.576 19:27:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.576 19:27:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.576 19:27:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.576 19:27:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.576 19:27:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.576 19:27:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.576 19:27:08 -- paths/export.sh@5 -- # export PATH 00:18:10.576 19:27:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.576 19:27:08 -- nvmf/common.sh@46 -- # : 0 00:18:10.576 19:27:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:10.576 19:27:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:10.576 19:27:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:10.576 19:27:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.576 19:27:08 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.576 19:27:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:10.576 19:27:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:10.576 19:27:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:10.576 19:27:08 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.576 19:27:08 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.576 19:27:08 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:10.576 19:27:08 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.576 19:27:08 -- target/multipath.sh@43 -- # nvmftestinit 00:18:10.576 19:27:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:10.576 19:27:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.576 19:27:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:10.576 19:27:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:10.576 19:27:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:10.576 19:27:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.576 19:27:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.576 19:27:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.576 19:27:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:10.576 19:27:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:10.576 19:27:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:10.576 19:27:08 -- common/autotest_common.sh@10 -- # set +x 00:18:12.488 19:27:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:12.488 19:27:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:12.488 19:27:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:12.488 19:27:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:12.488 19:27:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:12.488 19:27:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:12.488 19:27:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:12.488 19:27:10 -- nvmf/common.sh@294 -- # net_devs=() 00:18:12.488 19:27:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:12.488 19:27:10 -- nvmf/common.sh@295 -- # e810=() 00:18:12.488 19:27:10 -- nvmf/common.sh@295 -- # local -ga e810 00:18:12.488 19:27:10 -- nvmf/common.sh@296 -- # x722=() 00:18:12.488 19:27:10 -- nvmf/common.sh@296 -- # local -ga x722 00:18:12.488 19:27:10 -- nvmf/common.sh@297 -- # mlx=() 00:18:12.488 19:27:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:12.488 19:27:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.488 19:27:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.488 
19:27:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:12.488 19:27:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:12.488 19:27:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:12.488 19:27:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:12.488 19:27:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:12.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:12.488 19:27:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:12.488 19:27:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:12.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:12.488 19:27:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:12.488 19:27:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:12.488 19:27:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.488 19:27:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:12.488 19:27:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.488 19:27:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:12.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:12.488 19:27:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.488 19:27:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:12.488 19:27:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.488 19:27:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:12.488 19:27:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.488 19:27:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:12.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:12.488 19:27:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.488 19:27:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:12.488 19:27:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:12.488 19:27:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:12.488 19:27:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:12.488 19:27:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.488 19:27:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.488 19:27:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.488 19:27:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:12.488 19:27:10 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.488 19:27:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.488 19:27:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:12.488 19:27:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.488 19:27:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.488 19:27:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:12.488 19:27:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:12.488 19:27:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.488 19:27:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.747 19:27:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.747 19:27:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.747 19:27:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:12.747 19:27:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.747 19:27:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.747 19:27:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.747 19:27:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:12.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:18:12.747 00:18:12.747 --- 10.0.0.2 ping statistics --- 00:18:12.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.747 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:18:12.747 19:27:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:18:12.747 00:18:12.747 --- 10.0.0.1 ping statistics --- 00:18:12.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.747 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:12.747 19:27:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.747 19:27:10 -- nvmf/common.sh@410 -- # return 0 00:18:12.747 19:27:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:12.747 19:27:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.747 19:27:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:12.747 19:27:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:12.747 19:27:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.747 19:27:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:12.747 19:27:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:12.747 19:27:10 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:12.747 19:27:10 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:12.747 only one NIC for nvmf test 00:18:12.747 19:27:10 -- target/multipath.sh@47 -- # nvmftestfini 00:18:12.747 19:27:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:12.747 19:27:10 -- nvmf/common.sh@116 -- # sync 00:18:12.747 19:27:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:12.747 19:27:10 -- nvmf/common.sh@119 -- # set +e 00:18:12.747 19:27:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:12.747 19:27:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:12.747 rmmod nvme_tcp 00:18:12.747 rmmod nvme_fabrics 00:18:12.747 rmmod nvme_keyring 00:18:12.747 19:27:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:12.747 19:27:10 -- nvmf/common.sh@123 -- # set -e 00:18:12.747 19:27:10 -- nvmf/common.sh@124 -- # return 0 00:18:12.747 19:27:10 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:12.747 19:27:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:12.747 19:27:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:12.747 19:27:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:12.747 19:27:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.747 19:27:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:12.747 19:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.747 19:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.747 19:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.284 19:27:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:15.284 19:27:12 -- target/multipath.sh@48 -- # exit 0 00:18:15.284 19:27:12 -- target/multipath.sh@1 -- # nvmftestfini 00:18:15.284 19:27:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.284 19:27:12 -- nvmf/common.sh@116 -- # sync 00:18:15.284 19:27:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.284 19:27:12 -- nvmf/common.sh@119 -- # set +e 00:18:15.284 19:27:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.284 19:27:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.284 19:27:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.284 19:27:12 -- nvmf/common.sh@123 -- # set -e 00:18:15.284 19:27:12 -- nvmf/common.sh@124 -- # return 0 00:18:15.284 19:27:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:15.284 19:27:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.284 19:27:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.284 19:27:12 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:15.284 19:27:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.284 19:27:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.284 19:27:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.284 19:27:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.284 19:27:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.284 19:27:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:15.284 00:18:15.284 real 0m4.458s 00:18:15.284 user 0m0.956s 00:18:15.284 sys 0m1.511s 00:18:15.284 19:27:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:15.284 19:27:12 -- common/autotest_common.sh@10 -- # set +x 00:18:15.284 ************************************ 00:18:15.284 END TEST nvmf_multipath 00:18:15.284 ************************************ 00:18:15.284 19:27:12 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.284 19:27:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.284 19:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.284 19:27:12 -- common/autotest_common.sh@10 -- # set +x 00:18:15.284 ************************************ 00:18:15.284 START TEST nvmf_zcopy 00:18:15.284 ************************************ 00:18:15.284 19:27:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.284 * Looking for test storage... 00:18:15.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.284 19:27:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:15.284 19:27:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:15.284 19:27:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:15.284 19:27:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:15.284 19:27:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:15.284 19:27:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:15.284 19:27:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:15.284 19:27:13 -- scripts/common.sh@335 -- # IFS=.-: 00:18:15.284 19:27:13 -- scripts/common.sh@335 -- # read -ra ver1 00:18:15.284 19:27:13 -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.284 19:27:13 -- scripts/common.sh@336 -- # read -ra ver2 00:18:15.284 19:27:13 -- scripts/common.sh@337 -- # local 'op=<' 00:18:15.284 19:27:13 -- scripts/common.sh@339 -- # ver1_l=2 00:18:15.284 19:27:13 -- scripts/common.sh@340 -- # ver2_l=1 00:18:15.284 19:27:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:15.284 19:27:13 -- scripts/common.sh@343 -- # case "$op" in 00:18:15.284 19:27:13 -- scripts/common.sh@344 -- # : 1 00:18:15.284 19:27:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:15.284 19:27:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.284 19:27:13 -- scripts/common.sh@364 -- # decimal 1 00:18:15.284 19:27:13 -- scripts/common.sh@352 -- # local d=1 00:18:15.284 19:27:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.284 19:27:13 -- scripts/common.sh@354 -- # echo 1 00:18:15.284 19:27:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:15.284 19:27:13 -- scripts/common.sh@365 -- # decimal 2 00:18:15.284 19:27:13 -- scripts/common.sh@352 -- # local d=2 00:18:15.284 19:27:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.284 19:27:13 -- scripts/common.sh@354 -- # echo 2 00:18:15.284 19:27:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:15.284 19:27:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:15.284 19:27:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:15.284 19:27:13 -- scripts/common.sh@367 -- # return 0 00:18:15.284 19:27:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.284 19:27:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:15.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.284 --rc genhtml_branch_coverage=1 00:18:15.284 --rc genhtml_function_coverage=1 00:18:15.284 --rc genhtml_legend=1 00:18:15.284 --rc geninfo_all_blocks=1 00:18:15.284 --rc geninfo_unexecuted_blocks=1 00:18:15.284 00:18:15.284 ' 00:18:15.284 19:27:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:15.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.284 --rc genhtml_branch_coverage=1 00:18:15.284 --rc genhtml_function_coverage=1 00:18:15.284 --rc genhtml_legend=1 00:18:15.284 --rc geninfo_all_blocks=1 00:18:15.284 --rc geninfo_unexecuted_blocks=1 00:18:15.284 00:18:15.284 ' 00:18:15.284 19:27:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:15.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.284 --rc genhtml_branch_coverage=1 00:18:15.284 --rc genhtml_function_coverage=1 00:18:15.284 --rc genhtml_legend=1 00:18:15.284 --rc geninfo_all_blocks=1 00:18:15.284 --rc geninfo_unexecuted_blocks=1 00:18:15.284 00:18:15.284 ' 00:18:15.284 19:27:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:15.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.284 --rc genhtml_branch_coverage=1 00:18:15.284 --rc genhtml_function_coverage=1 00:18:15.284 --rc genhtml_legend=1 00:18:15.284 --rc geninfo_all_blocks=1 00:18:15.284 --rc geninfo_unexecuted_blocks=1 00:18:15.284 00:18:15.284 ' 00:18:15.284 19:27:13 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.284 19:27:13 -- nvmf/common.sh@7 -- # uname -s 00:18:15.284 19:27:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.284 19:27:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.284 19:27:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.284 19:27:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.284 19:27:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.284 19:27:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.284 19:27:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.284 19:27:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.284 19:27:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.284 19:27:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.284 19:27:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.284 19:27:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.284 19:27:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.284 19:27:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.284 19:27:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.284 19:27:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.284 19:27:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.284 19:27:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.284 19:27:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.284 19:27:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.285 19:27:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.285 19:27:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.285 19:27:13 -- paths/export.sh@5 -- # export PATH 00:18:15.285 19:27:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.285 19:27:13 -- nvmf/common.sh@46 -- # : 0 00:18:15.285 19:27:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.285 19:27:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.285 19:27:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.285 19:27:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.285 19:27:13 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.285 19:27:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.285 19:27:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.285 19:27:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.285 19:27:13 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:15.285 19:27:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:15.285 19:27:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.285 19:27:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.285 19:27:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.285 19:27:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:15.285 19:27:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.285 19:27:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.285 19:27:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.285 19:27:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:15.285 19:27:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:15.285 19:27:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:15.285 19:27:13 -- common/autotest_common.sh@10 -- # set +x 00:18:17.190 19:27:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:17.190 19:27:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:17.190 19:27:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:17.190 19:27:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:17.190 19:27:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:17.190 19:27:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:17.190 19:27:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:17.190 19:27:15 -- nvmf/common.sh@294 -- # net_devs=() 00:18:17.190 19:27:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:17.190 19:27:15 -- nvmf/common.sh@295 -- # e810=() 00:18:17.190 19:27:15 -- nvmf/common.sh@295 -- # local -ga e810 00:18:17.190 19:27:15 -- nvmf/common.sh@296 -- # x722=() 00:18:17.190 19:27:15 -- nvmf/common.sh@296 -- # local -ga x722 00:18:17.190 19:27:15 -- nvmf/common.sh@297 -- # mlx=() 00:18:17.190 19:27:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:17.190 19:27:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.190 19:27:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:17.190 19:27:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:17.190 19:27:15 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:17.190 19:27:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:17.190 19:27:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.190 19:27:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:17.190 19:27:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.190 19:27:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:17.190 19:27:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:17.190 19:27:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.190 19:27:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:17.190 19:27:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.190 19:27:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.190 19:27:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.190 19:27:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:17.190 19:27:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.190 19:27:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:17.190 19:27:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.190 19:27:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.190 19:27:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.190 19:27:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:17.190 19:27:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:17.190 19:27:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:17.190 19:27:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.190 19:27:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.190 19:27:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.190 19:27:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:17.190 19:27:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.190 19:27:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.190 19:27:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:17.190 19:27:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.190 19:27:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:18:17.190 19:27:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:17.190 19:27:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:17.190 19:27:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.190 19:27:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.190 19:27:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.190 19:27:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.190 19:27:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:17.190 19:27:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.190 19:27:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.190 19:27:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.190 19:27:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:17.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:18:17.190 00:18:17.190 --- 10.0.0.2 ping statistics --- 00:18:17.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.190 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:17.190 19:27:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:18:17.190 00:18:17.190 --- 10.0.0.1 ping statistics --- 00:18:17.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.190 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:17.190 19:27:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.190 19:27:15 -- nvmf/common.sh@410 -- # return 0 00:18:17.190 19:27:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:17.190 19:27:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.190 19:27:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:17.190 19:27:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.190 19:27:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:17.190 19:27:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:17.190 19:27:15 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:17.191 19:27:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:17.191 19:27:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.191 19:27:15 -- common/autotest_common.sh@10 -- # set +x 00:18:17.191 19:27:15 -- nvmf/common.sh@469 -- # nvmfpid=1207087 00:18:17.191 19:27:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.191 19:27:15 -- nvmf/common.sh@470 -- # waitforlisten 1207087 00:18:17.191 19:27:15 -- common/autotest_common.sh@829 -- # '[' -z 1207087 ']' 00:18:17.191 19:27:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.191 19:27:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.191 19:27:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:17.191 19:27:15 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:17.191 19:27:15 -- common/autotest_common.sh@10 -- # set +x
00:18:17.191 [2024-11-17 19:27:15.372810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:17.191 [2024-11-17 19:27:15.372895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:17.191 EAL: No free 2048 kB hugepages reported on node 1
00:18:17.191 [2024-11-17 19:27:15.434072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.451 [2024-11-17 19:27:15.523094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:18:17.451 [2024-11-17 19:27:15.523247] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:17.451 [2024-11-17 19:27:15.523263] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:17.451 [2024-11-17 19:27:15.523276] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:17.451 [2024-11-17 19:27:15.523314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:18.387 19:27:16 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:18.387 19:27:16 -- common/autotest_common.sh@862 -- # return 0
00:18:18.387 19:27:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:18:18.387 19:27:16 -- common/autotest_common.sh@728 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 19:27:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:18.387 19:27:16 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:18:18.387 19:27:16 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:18:18.387 19:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 [2024-11-17 19:27:16.394578] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:18.387 19:27:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.387 19:27:16 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:18:18.387 19:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 19:27:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.387 19:27:16 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:18.387 19:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 [2024-11-17 19:27:16.410752] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:18.387 19:27:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.387 19:27:16 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:18.387 19:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 19:27:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.387 19:27:16 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:18:18.387 19:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 malloc0
00:18:18.387 19:27:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.387 19:27:16 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:18.387 19:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.387 19:27:16 -- common/autotest_common.sh@10 -- # set +x
00:18:18.387 19:27:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.387 19:27:16 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:18.387 19:27:16 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:18.387 19:27:16 -- nvmf/common.sh@520 -- # config=()
00:18:18.387 19:27:16 -- nvmf/common.sh@520 -- # local subsystem config
00:18:18.387 19:27:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:18:18.387 19:27:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:18:18.387 {
00:18:18.387 "params": {
00:18:18.387 "name": "Nvme$subsystem",
00:18:18.387 "trtype": "$TEST_TRANSPORT",
00:18:18.387 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:18.387 "adrfam": "ipv4",
00:18:18.387 "trsvcid": "$NVMF_PORT",
00:18:18.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:18.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:18.387 "hdgst": ${hdgst:-false},
00:18:18.387 "ddgst": ${ddgst:-false}
00:18:18.387 },
00:18:18.387 "method": "bdev_nvme_attach_controller"
00:18:18.387 }
00:18:18.387 EOF
00:18:18.387 )")
00:18:18.387 19:27:16 -- nvmf/common.sh@542 -- # cat
00:18:18.387 19:27:16 -- nvmf/common.sh@544 -- # jq .
00:18:18.387 19:27:16 -- nvmf/common.sh@545 -- # IFS=,
00:18:18.387 19:27:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:18:18.387 "params": {
00:18:18.387 "name": "Nvme1",
00:18:18.387 "trtype": "tcp",
00:18:18.387 "traddr": "10.0.0.2",
00:18:18.387 "adrfam": "ipv4",
00:18:18.387 "trsvcid": "4420",
00:18:18.387 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:18.387 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:18.387 "hdgst": false,
00:18:18.387 "ddgst": false
00:18:18.387 },
00:18:18.387 "method": "bdev_nvme_attach_controller"
00:18:18.387 }'
00:18:18.387 [2024-11-17 19:27:16.482158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:18.387 [2024-11-17 19:27:16.482252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207245 ]
00:18:18.387 EAL: No free 2048 kB hugepages reported on node 1
00:18:18.387 [2024-11-17 19:27:16.545407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:18.387 [2024-11-17 19:27:16.636770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:18.646 Running I/O for 10 seconds...
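For reference, the xtrace above boils down to the following stand-alone sequence. This is a condensed sketch rather than the literal test script: it assumes scripts/rpc.py from the same SPDK tree is talking to the nvmf_tgt instance started for this run, and it reuses the addresses and flags exactly as they appear in the trace (10.0.0.2:4420, a 32 MB malloc bdev with 4096-byte blocks, and the bdevperf binary from build/examples).

    # Condensed sketch of the target-side setup traced above (assumes scripts/rpc.py
    # reaches the nvmf_tgt started for this test; flags copied from the xtrace).
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with the zero-copy option under test
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # bdevperf then attaches to that subsystem over TCP and drives 10 s of verify I/O,
    # 128 outstanding 8192-byte requests, using the JSON from gen_nvmf_target_json:
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The resolved JSON that gen_nvmf_target_json hands to bdevperf is printed just above: a single bdev_nvme_attach_controller entry for Nvme1 pointing at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with digests disabled.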
00:18:30.868 
00:18:30.868                                                                            Latency(us)
00:18:30.868 [2024-11-17T18:27:29.135Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:30.868 [2024-11-17T18:27:29.135Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:30.868 Verification LBA range: start 0x0 length 0x1000
00:18:30.868 Nvme1n1                                    :      10.01    8464.03      66.13       0.00     0.00   15086.98    1231.83   23301.69
00:18:30.868 [2024-11-17T18:27:29.135Z] ===================================================================================================================
00:18:30.868 [2024-11-17T18:27:29.135Z] Total                       :               8464.03      66.13       0.00     0.00   15086.98    1231.83   23301.69
00:18:30.868 19:27:27 -- target/zcopy.sh@39 -- # perfpid=1208530
00:18:30.868 19:27:27 -- target/zcopy.sh@41 -- # xtrace_disable
00:18:30.868 19:27:27 -- common/autotest_common.sh@10 -- # set +x
00:18:30.868 19:27:27 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:30.868 19:27:27 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:30.868 19:27:27 -- nvmf/common.sh@520 -- # config=()
00:18:30.868 19:27:27 -- nvmf/common.sh@520 -- # local subsystem config
00:18:30.868 19:27:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:18:30.868 19:27:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:18:30.868 {
00:18:30.868 "params": {
00:18:30.868 "name": "Nvme$subsystem",
00:18:30.868 "trtype": "$TEST_TRANSPORT",
00:18:30.868 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:30.868 "adrfam": "ipv4",
00:18:30.868 "trsvcid": "$NVMF_PORT",
00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:30.868 "hdgst": ${hdgst:-false},
00:18:30.868 "ddgst": ${ddgst:-false}
00:18:30.868 },
00:18:30.868 "method": "bdev_nvme_attach_controller"
00:18:30.868 }
00:18:30.868 EOF
00:18:30.868 )")
00:18:30.868 [2024-11-17 19:27:27.171602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:30.868 [2024-11-17 19:27:27.171652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:30.868 19:27:27 -- nvmf/common.sh@542 -- # cat
00:18:30.868 19:27:27 -- nvmf/common.sh@544 -- # jq .
00:18:30.868 19:27:27 -- nvmf/common.sh@545 -- # IFS=, 00:18:30.868 19:27:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:30.868 "params": { 00:18:30.868 "name": "Nvme1", 00:18:30.868 "trtype": "tcp", 00:18:30.868 "traddr": "10.0.0.2", 00:18:30.868 "adrfam": "ipv4", 00:18:30.868 "trsvcid": "4420", 00:18:30.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.868 "hdgst": false, 00:18:30.868 "ddgst": false 00:18:30.868 }, 00:18:30.868 "method": "bdev_nvme_attach_controller" 00:18:30.868 }' 00:18:30.868 [2024-11-17 19:27:27.179548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.179575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.187569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.187595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.195582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.195605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.203601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.203622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.209498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:30.868 [2024-11-17 19:27:27.209556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208530 ] 00:18:30.868 [2024-11-17 19:27:27.211617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.211638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.219642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.219684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.227693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.227716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.235708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.235731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.868 [2024-11-17 19:27:27.243744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.243766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.251771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.251794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.259778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.259801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.267800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.267825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.275076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.868 [2024-11-17 19:27:27.275808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.275829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.283872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.283911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.291886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.291923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.299876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.299898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.307899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.307920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.315918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.315967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.323963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.323985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.332011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.332060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.340004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.340046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.348037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.348063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.356058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.356083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.364074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.364099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.368051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.868 [2024-11-17 19:27:27.372107] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.372133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.868 [2024-11-17 19:27:27.380132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.868 [2024-11-17 19:27:27.380156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.388193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.388232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.396214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.396255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.404234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.404276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.412257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.412300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.420279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.420320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.428304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.428346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.436288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.436316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.444342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.444382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.452370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.452413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.460354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.460379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.468369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.468391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.476417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.476446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.484439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.484468] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.492463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.492490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.500485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.500513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.508505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.508531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.516527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.516552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.524548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.524573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.532571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.532595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.540604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.540631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.548626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.548655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.556664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.556715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.564685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.564712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.612908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.612935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.620843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.620867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 Running I/O for 5 seconds... 
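The pairs of messages "Requested NSID 1 already in use" / "Unable to add namespace" that repeat throughout this part of the log are expected for this test case: while the 5-second randrw bdevperf job (perfpid 1208530 above) is running, the script keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached, so each attempt pauses and resumes the subsystem and then fails with exactly this pair of errors, presumably to force in-flight zero-copy requests through the pause/resume path. A loop of roughly the following shape reproduces the pattern; this is a reconstruction from the trace, not the literal zcopy.sh source, so the exact timing and exit condition may differ.

    # Rough reconstruction of the loop behind the repeated errors (not the literal
    # zcopy.sh code): keep re-adding the already-attached namespace while the
    # 5-second bdevperf job is still alive, then collect its exit status.
    while kill -0 "$perfpid" 2> /dev/null; do
        # Fails every time with "Requested NSID 1 already in use", but each call
        # still pauses and resumes nqn.2016-06.io.spdk:cnode1 underneath the I/O.
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"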
00:18:30.869 [2024-11-17 19:27:27.628864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.628886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.643387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.643419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.654622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.654653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.667963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.668010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.678759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.678787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.690624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.690656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.702202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.702233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.713251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.713282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.724303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.724334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.735492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.735523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.747143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.747174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.758670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.758724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.770107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.770138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.780565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.780596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.791506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 
[2024-11-17 19:27:27.791537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.803026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.803056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.816148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.816179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.826558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.826587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.837599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.837630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.850616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.850648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.861335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.861366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.872299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.872330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.885376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.885407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.895930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.895958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.906880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.906916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.918080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.918110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.929244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.929275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.869 [2024-11-17 19:27:27.940627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.869 [2024-11-17 19:27:27.940658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:27.951876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:27.951904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:27.962646] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:27.962688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:27.973813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:27.973841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:27.985065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:27.985097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:27.998329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:27.998361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.008418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.008449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.019657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.019698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.032582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.032613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.042824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.042852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.054082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.054113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.066987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.067030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.077187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.077219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.088054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.088085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.099171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.099203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.110137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.110168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.123532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.123570] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.134246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.134277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.145404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.145436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.158344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.158374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.167826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.167856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.178652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.178692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.189656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.189695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.200640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.200671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.211568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.211598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.222631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.222662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.235381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.235413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.246269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.246300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.257376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.257408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.269750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.269782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.279696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.279728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.291729] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.291760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.303227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.303261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.314559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.314591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.327819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.327850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.338339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.338380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.349163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.349204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.361819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.361849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.372401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.372433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.383502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.383533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.394641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.394681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.405891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.405922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.417285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.417317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.428176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.428206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.441539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.441569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.452242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.452273] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.463306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.463337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.476018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.476050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.485941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.485972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.496976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.497007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.510061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.510092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.519715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.519746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.531601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.531631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.544738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.870 [2024-11-17 19:27:28.544769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.870 [2024-11-17 19:27:28.555057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.555096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.565770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.565801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.579078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.579108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.589706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.589736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.600569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.600598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.611609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.611640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.622944] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.622974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.633750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.633791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.644683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.644714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.655537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.655567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.666164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.666195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.677170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.677201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.688290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.688320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.699644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.699683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.710942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.710973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.721745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.721786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.732685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.732716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.743988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.744018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.755212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.755242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.767935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.767979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.778245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.778275] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.789330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.789360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.800253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.800283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.811087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.811116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.821903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.821933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.832798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.832829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.843773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.843805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.854763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.854794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.867612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.867643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.876817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.876848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.888586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.888617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.899631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.899662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.910956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.910996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.924169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.924200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.934803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.934834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.946178] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.946209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.959516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.959547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.970508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.970538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.981493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.981523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:28.992655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:28.992693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.003771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.003802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.017192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.017223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.028131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.028162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.039074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.039104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.050499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.050530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.061612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.061642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.072377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.072407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.083739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.083770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.094773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.094803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.871 [2024-11-17 19:27:29.106197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:30.871 [2024-11-17 19:27:29.106227] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:30.871 [2024-11-17 19:27:29.117322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:30.871 [2024-11-17 19:27:29.117352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for every attempt, roughly one attempt every 10-13 ms, from 2024-11-17 19:27:29.117 through 19:27:32.384 (log time 00:18:30.871 to 00:18:34.247): spdk_nvmf_subsystem_add_ns_ext rejects the duplicate NSID and nvmf_rpc_ns_paused reports the failed add ...]
00:18:34.247 [2024-11-17 19:27:32.384945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:34.247 [2024-11-17 19:27:32.384985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.397904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.397934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.408362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.408392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.419282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.419312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.430551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.430580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.441709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.441739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.452572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.452602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.463363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.463393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.476182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.476211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.486511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.486543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.497407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.497437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.247 [2024-11-17 19:27:32.510491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.247 [2024-11-17 19:27:32.510521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.521292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.521322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.532048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.532078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.544890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.544920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.555086] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.555115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.565857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.565887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.579157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.579187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.589340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.589370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.600530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.600560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.611311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.611340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.622103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.622133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.634926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.634956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.643826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.643854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 00:18:34.507 Latency(us) 00:18:34.507 [2024-11-17T18:27:32.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.507 [2024-11-17T18:27:32.774Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:34.507 Nvme1n1 : 5.01 11486.59 89.74 0.00 0.00 11129.73 4757.43 22816.24 00:18:34.507 [2024-11-17T18:27:32.774Z] =================================================================================================================== 00:18:34.507 [2024-11-17T18:27:32.774Z] Total : 11486.59 89.74 0.00 0.00 11129.73 4757.43 22816.24 00:18:34.507 [2024-11-17 19:27:32.650607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.650635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.658623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.658651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.666672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.666718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.682797] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.682865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.690784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.690833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.698796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.698844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.706819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.706868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.714843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.714892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.722856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.507 [2024-11-17 19:27:32.722905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.507 [2024-11-17 19:27:32.730876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.508 [2024-11-17 19:27:32.730923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.508 [2024-11-17 19:27:32.738906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.508 [2024-11-17 19:27:32.738956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.508 [2024-11-17 19:27:32.746933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.508 [2024-11-17 19:27:32.746983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.508 [2024-11-17 19:27:32.754954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.508 [2024-11-17 19:27:32.755001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.508 [2024-11-17 19:27:32.762974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.508 [2024-11-17 19:27:32.763023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.508 [2024-11-17 19:27:32.770992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.508 [2024-11-17 19:27:32.771040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.779021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.779070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.787033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.787081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.795014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.795045] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.803028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.803056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.811099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.811148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.819129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.819181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.827132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.827176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.835113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.835138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.843178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.843223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.851206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.851255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.859220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.859260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.867198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.867221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 [2024-11-17 19:27:32.875222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.768 [2024-11-17 19:27:32.875246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1208530) - No such process 00:18:34.768 19:27:32 -- target/zcopy.sh@49 -- # wait 1208530 00:18:34.768 19:27:32 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.768 19:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.768 19:27:32 -- common/autotest_common.sh@10 -- # set +x 00:18:34.768 19:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.768 19:27:32 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:34.768 19:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.768 19:27:32 -- common/autotest_common.sh@10 -- # set +x 00:18:34.768 delay0 00:18:34.768 19:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.768 19:27:32 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:34.768 19:27:32 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.768 19:27:32 -- common/autotest_common.sh@10 -- # set +x 00:18:34.768 19:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.768 19:27:32 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:34.768 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.768 [2024-11-17 19:27:32.951953] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:41.341 Initializing NVMe Controllers 00:18:41.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:41.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:41.341 Initialization complete. Launching workers. 00:18:41.341 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 241 00:18:41.341 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 528, failed to submit 33 00:18:41.341 success 408, unsuccess 120, failed 0 00:18:41.341 19:27:39 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:41.341 19:27:39 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:41.341 19:27:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:41.341 19:27:39 -- nvmf/common.sh@116 -- # sync 00:18:41.341 19:27:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:41.341 19:27:39 -- nvmf/common.sh@119 -- # set +e 00:18:41.341 19:27:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:41.341 19:27:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:41.341 rmmod nvme_tcp 00:18:41.341 rmmod nvme_fabrics 00:18:41.341 rmmod nvme_keyring 00:18:41.341 19:27:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:41.341 19:27:39 -- nvmf/common.sh@123 -- # set -e 00:18:41.341 19:27:39 -- nvmf/common.sh@124 -- # return 0 00:18:41.341 19:27:39 -- nvmf/common.sh@477 -- # '[' -n 1207087 ']' 00:18:41.341 19:27:39 -- nvmf/common.sh@478 -- # killprocess 1207087 00:18:41.341 19:27:39 -- common/autotest_common.sh@936 -- # '[' -z 1207087 ']' 00:18:41.341 19:27:39 -- common/autotest_common.sh@940 -- # kill -0 1207087 00:18:41.341 19:27:39 -- common/autotest_common.sh@941 -- # uname 00:18:41.341 19:27:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.341 19:27:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1207087 00:18:41.341 19:27:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:41.341 19:27:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:41.341 19:27:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1207087' 00:18:41.341 killing process with pid 1207087 00:18:41.341 19:27:39 -- common/autotest_common.sh@955 -- # kill 1207087 00:18:41.341 19:27:39 -- common/autotest_common.sh@960 -- # wait 1207087 00:18:41.341 19:27:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:41.341 19:27:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:41.341 19:27:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:41.341 19:27:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.341 19:27:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:41.341 19:27:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.341 19:27:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:18:41.341 19:27:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.288 19:27:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:43.288 00:18:43.288 real 0m28.502s 00:18:43.288 user 0m41.929s 00:18:43.288 sys 0m8.181s 00:18:43.288 19:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:43.288 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:43.288 ************************************ 00:18:43.288 END TEST nvmf_zcopy 00:18:43.288 ************************************ 00:18:43.288 19:27:41 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:43.288 19:27:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:43.288 19:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.288 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:43.288 ************************************ 00:18:43.288 START TEST nvmf_nmic 00:18:43.288 ************************************ 00:18:43.288 19:27:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:43.552 * Looking for test storage... 00:18:43.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.552 19:27:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:43.552 19:27:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:43.552 19:27:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:43.552 19:27:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:43.552 19:27:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:43.552 19:27:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:43.552 19:27:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:43.552 19:27:41 -- scripts/common.sh@335 -- # IFS=.-: 00:18:43.552 19:27:41 -- scripts/common.sh@335 -- # read -ra ver1 00:18:43.552 19:27:41 -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.552 19:27:41 -- scripts/common.sh@336 -- # read -ra ver2 00:18:43.552 19:27:41 -- scripts/common.sh@337 -- # local 'op=<' 00:18:43.552 19:27:41 -- scripts/common.sh@339 -- # ver1_l=2 00:18:43.552 19:27:41 -- scripts/common.sh@340 -- # ver2_l=1 00:18:43.552 19:27:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:43.552 19:27:41 -- scripts/common.sh@343 -- # case "$op" in 00:18:43.552 19:27:41 -- scripts/common.sh@344 -- # : 1 00:18:43.552 19:27:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:43.552 19:27:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.552 19:27:41 -- scripts/common.sh@364 -- # decimal 1 00:18:43.552 19:27:41 -- scripts/common.sh@352 -- # local d=1 00:18:43.552 19:27:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.552 19:27:41 -- scripts/common.sh@354 -- # echo 1 00:18:43.552 19:27:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:43.552 19:27:41 -- scripts/common.sh@365 -- # decimal 2 00:18:43.552 19:27:41 -- scripts/common.sh@352 -- # local d=2 00:18:43.552 19:27:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.552 19:27:41 -- scripts/common.sh@354 -- # echo 2 00:18:43.552 19:27:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:43.552 19:27:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:43.552 19:27:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:43.552 19:27:41 -- scripts/common.sh@367 -- # return 0 00:18:43.552 19:27:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.552 19:27:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.552 --rc genhtml_branch_coverage=1 00:18:43.552 --rc genhtml_function_coverage=1 00:18:43.552 --rc genhtml_legend=1 00:18:43.552 --rc geninfo_all_blocks=1 00:18:43.552 --rc geninfo_unexecuted_blocks=1 00:18:43.552 00:18:43.552 ' 00:18:43.552 19:27:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.552 --rc genhtml_branch_coverage=1 00:18:43.552 --rc genhtml_function_coverage=1 00:18:43.552 --rc genhtml_legend=1 00:18:43.552 --rc geninfo_all_blocks=1 00:18:43.552 --rc geninfo_unexecuted_blocks=1 00:18:43.552 00:18:43.552 ' 00:18:43.552 19:27:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.552 --rc genhtml_branch_coverage=1 00:18:43.552 --rc genhtml_function_coverage=1 00:18:43.552 --rc genhtml_legend=1 00:18:43.552 --rc geninfo_all_blocks=1 00:18:43.552 --rc geninfo_unexecuted_blocks=1 00:18:43.552 00:18:43.552 ' 00:18:43.552 19:27:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:43.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.552 --rc genhtml_branch_coverage=1 00:18:43.552 --rc genhtml_function_coverage=1 00:18:43.552 --rc genhtml_legend=1 00:18:43.552 --rc geninfo_all_blocks=1 00:18:43.552 --rc geninfo_unexecuted_blocks=1 00:18:43.552 00:18:43.552 ' 00:18:43.552 19:27:41 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.552 19:27:41 -- nvmf/common.sh@7 -- # uname -s 00:18:43.552 19:27:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.552 19:27:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.552 19:27:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.552 19:27:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.552 19:27:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.552 19:27:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.552 19:27:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.552 19:27:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.552 19:27:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.552 19:27:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.552 19:27:41 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.552 19:27:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.552 19:27:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.552 19:27:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.552 19:27:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.552 19:27:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.552 19:27:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.552 19:27:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.552 19:27:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.552 19:27:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.552 19:27:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.552 19:27:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.552 19:27:41 -- paths/export.sh@5 -- # export PATH 00:18:43.552 19:27:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.552 19:27:41 -- nvmf/common.sh@46 -- # : 0 00:18:43.552 19:27:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:43.552 19:27:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:43.553 19:27:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:43.553 19:27:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.553 19:27:41 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.553 19:27:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:43.553 19:27:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:43.553 19:27:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:43.553 19:27:41 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.553 19:27:41 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.553 19:27:41 -- target/nmic.sh@14 -- # nvmftestinit 00:18:43.553 19:27:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:43.553 19:27:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.553 19:27:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:43.553 19:27:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:43.553 19:27:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:43.553 19:27:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.553 19:27:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.553 19:27:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.553 19:27:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:43.553 19:27:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:43.553 19:27:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:43.553 19:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:45.456 19:27:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:45.456 19:27:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:45.456 19:27:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:45.456 19:27:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:45.456 19:27:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:45.456 19:27:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:45.456 19:27:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:45.456 19:27:43 -- nvmf/common.sh@294 -- # net_devs=() 00:18:45.456 19:27:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:45.456 19:27:43 -- nvmf/common.sh@295 -- # e810=() 00:18:45.456 19:27:43 -- nvmf/common.sh@295 -- # local -ga e810 00:18:45.715 19:27:43 -- nvmf/common.sh@296 -- # x722=() 00:18:45.715 19:27:43 -- nvmf/common.sh@296 -- # local -ga x722 00:18:45.715 19:27:43 -- nvmf/common.sh@297 -- # mlx=() 00:18:45.715 19:27:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:45.715 19:27:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.715 19:27:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:45.715 19:27:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:45.715 19:27:43 -- 
nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:45.715 19:27:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:45.715 19:27:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:45.715 19:27:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.715 19:27:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:45.715 19:27:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:45.715 19:27:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:45.715 19:27:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:45.715 19:27:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.715 19:27:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:45.715 19:27:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.715 19:27:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:45.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.715 19:27:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.715 19:27:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:45.715 19:27:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.715 19:27:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:45.715 19:27:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.715 19:27:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.715 19:27:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.715 19:27:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:45.715 19:27:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:45.715 19:27:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:45.715 19:27:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.715 19:27:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.715 19:27:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.715 19:27:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:45.715 19:27:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.715 19:27:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.715 19:27:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:45.715 19:27:43 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.715 19:27:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.715 19:27:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:45.715 19:27:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:45.715 19:27:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.715 19:27:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.715 19:27:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.715 19:27:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.715 19:27:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:45.715 19:27:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.715 19:27:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.715 19:27:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.715 19:27:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:45.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:18:45.715 00:18:45.715 --- 10.0.0.2 ping statistics --- 00:18:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.715 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:45.715 19:27:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:18:45.715 00:18:45.715 --- 10.0.0.1 ping statistics --- 00:18:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.715 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:45.715 19:27:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.715 19:27:43 -- nvmf/common.sh@410 -- # return 0 00:18:45.715 19:27:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:45.715 19:27:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.715 19:27:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:45.715 19:27:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.715 19:27:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:45.715 19:27:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:45.715 19:27:43 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:45.716 19:27:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:45.716 19:27:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.716 19:27:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.716 19:27:43 -- nvmf/common.sh@469 -- # nvmfpid=1211911 00:18:45.716 19:27:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.716 19:27:43 -- nvmf/common.sh@470 -- # waitforlisten 1211911 00:18:45.716 19:27:43 -- common/autotest_common.sh@829 -- # '[' -z 1211911 ']' 00:18:45.716 19:27:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.716 19:27:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.716 19:27:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
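For reference, the interface bring-up traced above can be reproduced by hand with a sequence along the following lines. The interface names (cvl_0_0 / cvl_0_1), the namespace name cvl_0_0_ns_spdk and the 10.0.0.0/24 addresses are the values used in this particular run, so treat this as a sketch of the pattern rather than a fixed recipe:

    # put the first port into a private network namespace for the target,
    # leave the second port in the root namespace for the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP reach port 4420
    ping -c 1 10.0.0.2                                                   # sanity check before starting nvmf_tgt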
00:18:45.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.716 19:27:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.716 19:27:43 -- common/autotest_common.sh@10 -- # set +x 00:18:45.716 [2024-11-17 19:27:43.922452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:45.716 [2024-11-17 19:27:43.922534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.716 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.974 [2024-11-17 19:27:43.991165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.974 [2024-11-17 19:27:44.085708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:45.974 [2024-11-17 19:27:44.085884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.974 [2024-11-17 19:27:44.085904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.974 [2024-11-17 19:27:44.085917] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.974 [2024-11-17 19:27:44.085998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.974 [2024-11-17 19:27:44.086054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.974 [2024-11-17 19:27:44.086169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.974 [2024-11-17 19:27:44.086173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.912 19:27:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.912 19:27:44 -- common/autotest_common.sh@862 -- # return 0 00:18:46.912 19:27:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:46.912 19:27:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 19:27:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.912 19:27:44 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 [2024-11-17 19:27:44.915359] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 Malloc0 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 [2024-11-17 19:27:44.966537] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:46.912 test case1: single bdev can't be used in multiple subsystems 00:18:46.912 19:27:44 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@28 -- # nmic_status=0 00:18:46.912 19:27:44 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 [2024-11-17 19:27:44.990387] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:46.912 [2024-11-17 19:27:44.990416] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:46.912 [2024-11-17 19:27:44.990445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.912 request: 00:18:46.912 { 00:18:46.912 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:46.912 "namespace": { 00:18:46.912 "bdev_name": "Malloc0" 00:18:46.912 }, 00:18:46.912 "method": "nvmf_subsystem_add_ns", 00:18:46.912 "req_id": 1 00:18:46.912 } 00:18:46.912 Got JSON-RPC error response 00:18:46.912 response: 00:18:46.912 { 00:18:46.912 "code": -32602, 00:18:46.912 "message": "Invalid parameters" 00:18:46.912 } 00:18:46.912 19:27:44 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:46.912 19:27:44 -- target/nmic.sh@29 -- # nmic_status=1 00:18:46.912 19:27:44 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:46.912 19:27:44 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:46.912 Adding namespace failed - expected result. 
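The "test case1" sequence above boils down to a short RPC exchange: a malloc bdev is claimed exclusively by the first subsystem that adds it as a namespace, so a second subsystem's nvmf_subsystem_add_ns on the same bdev is expected to fail with the JSON-RPC error shown. A minimal sketch, assuming a running nvmf_tgt and the stock scripts/rpc.py client (the test itself drives the same RPCs through its rpc_cmd wrapper):

    # build the first subsystem around Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # second subsystem tries to claim the same bdev: this add_ns must fail
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected: second claim succeeded' \
        || echo ' Adding namespace failed - expected result.'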
00:18:46.912 19:27:44 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:46.912 test case2: host connect to nvmf target in multiple paths 00:18:46.912 19:27:44 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:46.912 19:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.912 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:46.912 [2024-11-17 19:27:44.998534] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:46.912 19:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.912 19:27:45 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.480 19:27:45 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:48.418 19:27:46 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.418 19:27:46 -- common/autotest_common.sh@1187 -- # local i=0 00:18:48.418 19:27:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.418 19:27:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:48.418 19:27:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:50.323 19:27:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:50.323 19:27:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:50.323 19:27:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.323 19:27:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:50.323 19:27:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.323 19:27:48 -- common/autotest_common.sh@1197 -- # return 0 00:18:50.323 19:27:48 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:50.323 [global] 00:18:50.323 thread=1 00:18:50.323 invalidate=1 00:18:50.323 rw=write 00:18:50.323 time_based=1 00:18:50.323 runtime=1 00:18:50.323 ioengine=libaio 00:18:50.323 direct=1 00:18:50.323 bs=4096 00:18:50.323 iodepth=1 00:18:50.323 norandommap=0 00:18:50.323 numjobs=1 00:18:50.323 00:18:50.323 verify_dump=1 00:18:50.323 verify_backlog=512 00:18:50.323 verify_state_save=0 00:18:50.323 do_verify=1 00:18:50.323 verify=crc32c-intel 00:18:50.323 [job0] 00:18:50.323 filename=/dev/nvme0n1 00:18:50.323 Could not set queue depth (nvme0n1) 00:18:50.581 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.581 fio-3.35 00:18:50.581 Starting 1 thread 00:18:51.517 00:18:51.517 job0: (groupid=0, jobs=1): err= 0: pid=1212575: Sun Nov 17 19:27:49 2024 00:18:51.517 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:18:51.517 slat (nsec): min=9744, max=33115, avg=24713.62, stdev=9141.89 00:18:51.517 clat (usec): min=40937, max=42015, avg=41669.56, stdev=455.66 00:18:51.517 lat (usec): min=40969, max=42029, avg=41694.27, stdev=452.48 00:18:51.517 clat percentiles (usec): 00:18:51.517 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:51.517 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:51.517 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:51.517 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:51.517 | 99.99th=[42206] 00:18:51.517 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:18:51.517 slat (nsec): min=7823, max=77526, avg=18778.97, stdev=9134.53 00:18:51.517 clat (usec): min=133, max=339, avg=231.13, stdev=38.17 00:18:51.517 lat (usec): min=143, max=376, avg=249.91, stdev=35.47 00:18:51.517 clat percentiles (usec): 00:18:51.517 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 182], 20.00th=[ 194], 00:18:51.517 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:18:51.517 | 70.00th=[ 247], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 285], 00:18:51.517 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 338], 00:18:51.517 | 99.99th=[ 338] 00:18:51.517 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.517 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.517 lat (usec) : 250=69.42%, 500=26.64% 00:18:51.517 lat (msec) : 50=3.94% 00:18:51.517 cpu : usr=0.80%, sys=0.60%, ctx=533, majf=0, minf=1 00:18:51.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.517 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.517 00:18:51.517 Run status group 0 (all jobs): 00:18:51.517 READ: bw=83.5KiB/s (85.5kB/s), 83.5KiB/s-83.5KiB/s (85.5kB/s-85.5kB/s), io=84.0KiB (86.0kB), run=1006-1006msec 00:18:51.517 WRITE: bw=2036KiB/s (2085kB/s), 2036KiB/s-2036KiB/s (2085kB/s-2085kB/s), io=2048KiB (2097kB), run=1006-1006msec 00:18:51.517 00:18:51.517 Disk stats (read/write): 00:18:51.517 nvme0n1: ios=68/512, merge=0/0, ticks=771/112, in_queue=883, util=91.28% 00:18:51.517 19:27:49 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:51.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:51.776 19:27:49 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:51.776 19:27:49 -- common/autotest_common.sh@1208 -- # local i=0 00:18:51.776 19:27:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:51.776 19:27:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.776 19:27:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:51.776 19:27:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.776 19:27:49 -- common/autotest_common.sh@1220 -- # return 0 00:18:51.776 19:27:49 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:51.776 19:27:49 -- target/nmic.sh@53 -- # nvmftestfini 00:18:51.776 19:27:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:51.776 19:27:49 -- nvmf/common.sh@116 -- # sync 00:18:51.776 19:27:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:51.776 19:27:49 -- nvmf/common.sh@119 -- # set +e 00:18:51.776 19:27:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:51.776 19:27:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:51.776 rmmod nvme_tcp 00:18:51.776 rmmod nvme_fabrics 00:18:51.776 rmmod nvme_keyring 00:18:51.776 19:27:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:51.776 19:27:49 -- nvmf/common.sh@123 -- # set -e 00:18:51.776 19:27:49 -- 
nvmf/common.sh@124 -- # return 0 00:18:51.776 19:27:49 -- nvmf/common.sh@477 -- # '[' -n 1211911 ']' 00:18:51.776 19:27:49 -- nvmf/common.sh@478 -- # killprocess 1211911 00:18:51.776 19:27:49 -- common/autotest_common.sh@936 -- # '[' -z 1211911 ']' 00:18:51.776 19:27:49 -- common/autotest_common.sh@940 -- # kill -0 1211911 00:18:51.776 19:27:49 -- common/autotest_common.sh@941 -- # uname 00:18:51.776 19:27:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.776 19:27:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1211911 00:18:51.776 19:27:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.776 19:27:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.776 19:27:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1211911' 00:18:51.776 killing process with pid 1211911 00:18:51.776 19:27:49 -- common/autotest_common.sh@955 -- # kill 1211911 00:18:51.776 19:27:49 -- common/autotest_common.sh@960 -- # wait 1211911 00:18:52.036 19:27:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:52.036 19:27:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:52.036 19:27:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:52.036 19:27:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.036 19:27:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:52.036 19:27:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.036 19:27:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.036 19:27:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.576 19:27:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:54.576 00:18:54.576 real 0m10.771s 00:18:54.576 user 0m25.784s 00:18:54.576 sys 0m2.341s 00:18:54.576 19:27:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:54.576 19:27:52 -- common/autotest_common.sh@10 -- # set +x 00:18:54.576 ************************************ 00:18:54.576 END TEST nvmf_nmic 00:18:54.576 ************************************ 00:18:54.576 19:27:52 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:54.576 19:27:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:54.576 19:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.576 19:27:52 -- common/autotest_common.sh@10 -- # set +x 00:18:54.576 ************************************ 00:18:54.576 START TEST nvmf_fio_target 00:18:54.576 ************************************ 00:18:54.576 19:27:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:54.576 * Looking for test storage... 
00:18:54.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:54.576 19:27:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:54.576 19:27:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:54.576 19:27:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:54.576 19:27:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:54.576 19:27:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:54.576 19:27:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:54.576 19:27:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:54.576 19:27:52 -- scripts/common.sh@335 -- # IFS=.-: 00:18:54.576 19:27:52 -- scripts/common.sh@335 -- # read -ra ver1 00:18:54.576 19:27:52 -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.576 19:27:52 -- scripts/common.sh@336 -- # read -ra ver2 00:18:54.576 19:27:52 -- scripts/common.sh@337 -- # local 'op=<' 00:18:54.576 19:27:52 -- scripts/common.sh@339 -- # ver1_l=2 00:18:54.576 19:27:52 -- scripts/common.sh@340 -- # ver2_l=1 00:18:54.576 19:27:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:54.576 19:27:52 -- scripts/common.sh@343 -- # case "$op" in 00:18:54.576 19:27:52 -- scripts/common.sh@344 -- # : 1 00:18:54.576 19:27:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:54.576 19:27:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.576 19:27:52 -- scripts/common.sh@364 -- # decimal 1 00:18:54.576 19:27:52 -- scripts/common.sh@352 -- # local d=1 00:18:54.576 19:27:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.576 19:27:52 -- scripts/common.sh@354 -- # echo 1 00:18:54.576 19:27:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:54.576 19:27:52 -- scripts/common.sh@365 -- # decimal 2 00:18:54.576 19:27:52 -- scripts/common.sh@352 -- # local d=2 00:18:54.576 19:27:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.576 19:27:52 -- scripts/common.sh@354 -- # echo 2 00:18:54.576 19:27:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:54.576 19:27:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:54.576 19:27:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:54.576 19:27:52 -- scripts/common.sh@367 -- # return 0 00:18:54.576 19:27:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.576 19:27:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.576 --rc genhtml_branch_coverage=1 00:18:54.576 --rc genhtml_function_coverage=1 00:18:54.576 --rc genhtml_legend=1 00:18:54.576 --rc geninfo_all_blocks=1 00:18:54.576 --rc geninfo_unexecuted_blocks=1 00:18:54.576 00:18:54.576 ' 00:18:54.576 19:27:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.576 --rc genhtml_branch_coverage=1 00:18:54.576 --rc genhtml_function_coverage=1 00:18:54.576 --rc genhtml_legend=1 00:18:54.576 --rc geninfo_all_blocks=1 00:18:54.576 --rc geninfo_unexecuted_blocks=1 00:18:54.576 00:18:54.576 ' 00:18:54.576 19:27:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.576 --rc genhtml_branch_coverage=1 00:18:54.576 --rc genhtml_function_coverage=1 00:18:54.576 --rc genhtml_legend=1 00:18:54.576 --rc geninfo_all_blocks=1 00:18:54.576 --rc geninfo_unexecuted_blocks=1 00:18:54.576 00:18:54.576 
' 00:18:54.576 19:27:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.576 --rc genhtml_branch_coverage=1 00:18:54.576 --rc genhtml_function_coverage=1 00:18:54.576 --rc genhtml_legend=1 00:18:54.576 --rc geninfo_all_blocks=1 00:18:54.576 --rc geninfo_unexecuted_blocks=1 00:18:54.576 00:18:54.576 ' 00:18:54.576 19:27:52 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.576 19:27:52 -- nvmf/common.sh@7 -- # uname -s 00:18:54.577 19:27:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.577 19:27:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.577 19:27:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.577 19:27:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.577 19:27:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.577 19:27:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.577 19:27:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.577 19:27:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.577 19:27:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.577 19:27:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.577 19:27:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.577 19:27:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.577 19:27:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.577 19:27:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.577 19:27:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.577 19:27:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:54.577 19:27:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.577 19:27:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.577 19:27:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.577 19:27:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.577 19:27:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.577 19:27:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.577 19:27:52 -- paths/export.sh@5 -- # export PATH 00:18:54.577 19:27:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.577 19:27:52 -- nvmf/common.sh@46 -- # : 0 00:18:54.577 19:27:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:54.577 19:27:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:54.577 19:27:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:54.577 19:27:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.577 19:27:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.577 19:27:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:54.577 19:27:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:54.577 19:27:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:54.577 19:27:52 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.577 19:27:52 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.577 19:27:52 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.577 19:27:52 -- target/fio.sh@16 -- # nvmftestinit 00:18:54.577 19:27:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:54.577 19:27:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.577 19:27:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:54.577 19:27:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:54.577 19:27:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:54.577 19:27:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.577 19:27:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.577 19:27:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.577 19:27:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:54.577 19:27:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:54.577 19:27:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:54.577 19:27:52 -- common/autotest_common.sh@10 -- # set +x 00:18:56.483 19:27:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:56.483 19:27:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:56.483 19:27:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:56.483 19:27:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:56.483 19:27:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:56.483 19:27:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:56.483 19:27:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:56.483 19:27:54 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:56.483 19:27:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:56.483 19:27:54 -- nvmf/common.sh@295 -- # e810=() 00:18:56.483 19:27:54 -- nvmf/common.sh@295 -- # local -ga e810 00:18:56.483 19:27:54 -- nvmf/common.sh@296 -- # x722=() 00:18:56.483 19:27:54 -- nvmf/common.sh@296 -- # local -ga x722 00:18:56.483 19:27:54 -- nvmf/common.sh@297 -- # mlx=() 00:18:56.483 19:27:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:56.483 19:27:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.483 19:27:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:56.483 19:27:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:56.483 19:27:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:56.483 19:27:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:56.483 19:27:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:56.483 19:27:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:56.483 19:27:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:56.483 19:27:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:56.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:56.483 19:27:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:56.483 19:27:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:56.483 19:27:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:56.484 19:27:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:56.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:56.484 19:27:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:56.484 19:27:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:56.484 19:27:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.484 19:27:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:56.484 19:27:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:56.484 19:27:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:56.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:56.484 19:27:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.484 19:27:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:56.484 19:27:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.484 19:27:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:56.484 19:27:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.484 19:27:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:56.484 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:56.484 19:27:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.484 19:27:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:56.484 19:27:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:56.484 19:27:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:56.484 19:27:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.484 19:27:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.484 19:27:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.484 19:27:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:56.484 19:27:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.484 19:27:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.484 19:27:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:56.484 19:27:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.484 19:27:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.484 19:27:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:56.484 19:27:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:56.484 19:27:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.484 19:27:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.484 19:27:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.484 19:27:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.484 19:27:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:56.484 19:27:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.484 19:27:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.484 19:27:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.484 19:27:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:56.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:56.484 00:18:56.484 --- 10.0.0.2 ping statistics --- 00:18:56.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.484 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:56.484 19:27:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:18:56.484 00:18:56.484 --- 10.0.0.1 ping statistics --- 00:18:56.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.484 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:56.484 19:27:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.484 19:27:54 -- nvmf/common.sh@410 -- # return 0 00:18:56.484 19:27:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:56.484 19:27:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.484 19:27:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:56.484 19:27:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.484 19:27:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:56.484 19:27:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:56.484 19:27:54 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:56.484 19:27:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:56.484 19:27:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.484 19:27:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.484 19:27:54 -- nvmf/common.sh@469 -- # nvmfpid=1214747 00:18:56.484 19:27:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:56.484 19:27:54 -- nvmf/common.sh@470 -- # waitforlisten 1214747 00:18:56.484 19:27:54 -- common/autotest_common.sh@829 -- # '[' -z 1214747 ']' 00:18:56.484 19:27:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.484 19:27:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.484 19:27:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.484 19:27:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.484 19:27:54 -- common/autotest_common.sh@10 -- # set +x 00:18:56.484 [2024-11-17 19:27:54.593922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:56.484 [2024-11-17 19:27:54.594016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.484 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.484 [2024-11-17 19:27:54.665639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.744 [2024-11-17 19:27:54.759068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:56.744 [2024-11-17 19:27:54.759232] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.744 [2024-11-17 19:27:54.759252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.744 [2024-11-17 19:27:54.759266] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.744 [2024-11-17 19:27:54.759325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.744 [2024-11-17 19:27:54.759393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.744 [2024-11-17 19:27:54.759448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.744 [2024-11-17 19:27:54.759452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.311 19:27:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.311 19:27:55 -- common/autotest_common.sh@862 -- # return 0 00:18:57.311 19:27:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:57.311 19:27:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.311 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:18:57.570 19:27:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.570 19:27:55 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:57.828 [2024-11-17 19:27:55.839181] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.828 19:27:55 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.086 19:27:56 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:58.086 19:27:56 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.344 19:27:56 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:58.344 19:27:56 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.602 19:27:56 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:58.602 19:27:56 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.861 19:27:56 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:58.861 19:27:56 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:59.120 19:27:57 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.377 19:27:57 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:59.377 19:27:57 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.634 19:27:57 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:59.634 19:27:57 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.893 19:27:58 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:59.893 19:27:58 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:00.151 19:27:58 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:00.409 19:27:58 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:00.409 19:27:58 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.667 19:27:58 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:00.667 19:27:58 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:00.925 19:27:59 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.185 [2024-11-17 19:27:59.275779] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.185 19:27:59 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:01.443 19:27:59 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:01.703 19:27:59 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.272 19:28:00 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:02.272 19:28:00 -- common/autotest_common.sh@1187 -- # local i=0 00:19:02.272 19:28:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.272 19:28:00 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:19:02.272 19:28:00 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:19:02.272 19:28:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:04.810 19:28:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:04.810 19:28:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:04.810 19:28:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.810 19:28:02 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:19:04.810 19:28:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.810 19:28:02 -- common/autotest_common.sh@1197 -- # return 0 00:19:04.810 19:28:02 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:04.810 [global] 00:19:04.810 thread=1 00:19:04.810 invalidate=1 00:19:04.810 rw=write 00:19:04.810 time_based=1 00:19:04.810 runtime=1 00:19:04.810 ioengine=libaio 00:19:04.810 direct=1 00:19:04.810 bs=4096 00:19:04.810 iodepth=1 00:19:04.810 norandommap=0 00:19:04.810 numjobs=1 00:19:04.810 00:19:04.810 verify_dump=1 00:19:04.810 verify_backlog=512 00:19:04.810 verify_state_save=0 00:19:04.810 do_verify=1 00:19:04.810 verify=crc32c-intel 00:19:04.810 [job0] 00:19:04.810 filename=/dev/nvme0n1 00:19:04.810 [job1] 00:19:04.810 filename=/dev/nvme0n2 00:19:04.810 [job2] 00:19:04.810 filename=/dev/nvme0n3 00:19:04.810 [job3] 00:19:04.810 filename=/dev/nvme0n4 00:19:04.811 Could not set queue depth (nvme0n1) 00:19:04.811 Could not set queue depth (nvme0n2) 00:19:04.811 Could not set queue depth (nvme0n3) 00:19:04.811 Could not set queue depth (nvme0n4) 00:19:04.811 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.811 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.811 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.811 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.811 fio-3.35 
00:19:04.811 Starting 4 threads 00:19:05.748 00:19:05.748 job0: (groupid=0, jobs=1): err= 0: pid=1215893: Sun Nov 17 19:28:03 2024 00:19:05.748 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:19:05.748 slat (nsec): min=12602, max=39105, avg=22331.05, stdev=9893.69 00:19:05.748 clat (usec): min=40902, max=42062, avg=41231.93, stdev=433.18 00:19:05.748 lat (usec): min=40918, max=42079, avg=41254.26, stdev=437.78 00:19:05.748 clat percentiles (usec): 00:19:05.748 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:05.748 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:05.748 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:05.748 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:05.748 | 99.99th=[42206] 00:19:05.748 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:05.748 slat (nsec): min=8649, max=57075, avg=20022.53, stdev=7843.89 00:19:05.748 clat (usec): min=190, max=433, avg=239.70, stdev=32.45 00:19:05.748 lat (usec): min=208, max=466, avg=259.72, stdev=32.98 00:19:05.748 clat percentiles (usec): 00:19:05.748 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 223], 00:19:05.748 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:19:05.748 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 302], 00:19:05.748 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 433], 00:19:05.748 | 99.99th=[ 433] 00:19:05.748 bw ( KiB/s): min= 4096, max= 4096, per=32.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.748 lat (usec) : 250=76.17%, 500=19.89% 00:19:05.748 lat (msec) : 50=3.94% 00:19:05.748 cpu : usr=0.80%, sys=1.30%, ctx=533, majf=0, minf=2 00:19:05.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.748 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.748 job1: (groupid=0, jobs=1): err= 0: pid=1215894: Sun Nov 17 19:28:03 2024 00:19:05.748 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:19:05.748 slat (nsec): min=11688, max=35541, avg=21910.59, stdev=9619.74 00:19:05.748 clat (usec): min=40729, max=42100, avg=41774.35, stdev=421.29 00:19:05.748 lat (usec): min=40750, max=42114, avg=41796.26, stdev=421.57 00:19:05.748 clat percentiles (usec): 00:19:05.748 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:19:05.748 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:05.748 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:05.748 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:05.748 | 99.99th=[42206] 00:19:05.748 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:19:05.748 slat (nsec): min=8257, max=47012, avg=17221.87, stdev=6899.37 00:19:05.748 clat (usec): min=148, max=455, avg=196.90, stdev=27.35 00:19:05.748 lat (usec): min=158, max=476, avg=214.12, stdev=27.91 00:19:05.748 clat percentiles (usec): 00:19:05.748 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:19:05.748 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:19:05.748 | 70.00th=[ 206], 80.00th=[ 215], 
90.00th=[ 233], 95.00th=[ 247], 00:19:05.748 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 457], 99.95th=[ 457], 00:19:05.748 | 99.99th=[ 457] 00:19:05.748 bw ( KiB/s): min= 4096, max= 4096, per=32.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.748 lat (usec) : 250=92.13%, 500=3.75% 00:19:05.748 lat (msec) : 50=4.12% 00:19:05.748 cpu : usr=0.48%, sys=1.36%, ctx=534, majf=0, minf=1 00:19:05.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.748 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.748 job2: (groupid=0, jobs=1): err= 0: pid=1215895: Sun Nov 17 19:28:03 2024 00:19:05.748 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:05.748 slat (nsec): min=5252, max=35915, avg=7179.17, stdev=2996.72 00:19:05.748 clat (usec): min=172, max=42065, avg=731.04, stdev=4622.74 00:19:05.748 lat (usec): min=177, max=42079, avg=738.22, stdev=4625.01 00:19:05.748 clat percentiles (usec): 00:19:05.748 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:19:05.748 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:19:05.748 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 249], 95.00th=[ 269], 00:19:05.748 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:05.748 | 99.99th=[42206] 00:19:05.748 write: IOPS=1174, BW=4699KiB/s (4812kB/s)(4704KiB/1001msec); 0 zone resets 00:19:05.748 slat (nsec): min=6670, max=45202, avg=13339.22, stdev=7158.00 00:19:05.748 clat (usec): min=130, max=404, avg=189.07, stdev=47.97 00:19:05.748 lat (usec): min=137, max=442, avg=202.41, stdev=52.58 00:19:05.748 clat percentiles (usec): 00:19:05.748 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:19:05.748 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 204], 00:19:05.749 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 269], 00:19:05.749 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 404], 99.95th=[ 404], 00:19:05.749 | 99.99th=[ 404] 00:19:05.749 bw ( KiB/s): min= 4096, max= 4096, per=32.78%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.749 lat (usec) : 250=89.91%, 500=9.45%, 750=0.05% 00:19:05.749 lat (msec) : 50=0.59% 00:19:05.749 cpu : usr=1.50%, sys=3.30%, ctx=2200, majf=0, minf=1 00:19:05.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.749 issued rwts: total=1024,1176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.749 job3: (groupid=0, jobs=1): err= 0: pid=1215896: Sun Nov 17 19:28:03 2024 00:19:05.749 read: IOPS=956, BW=3826KiB/s (3917kB/s)(3860KiB/1009msec) 00:19:05.749 slat (nsec): min=5758, max=52973, avg=14156.82, stdev=7054.40 00:19:05.749 clat (usec): min=183, max=41963, avg=811.73, stdev=4890.11 00:19:05.749 lat (usec): min=190, max=41998, avg=825.89, stdev=4890.97 00:19:05.749 clat percentiles (usec): 00:19:05.749 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:19:05.749 | 
30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:19:05.749 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 251], 95.00th=[ 310], 00:19:05.749 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:05.749 | 99.99th=[42206] 00:19:05.749 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:19:05.749 slat (nsec): min=7892, max=52187, avg=17817.86, stdev=5413.48 00:19:05.749 clat (usec): min=145, max=371, avg=180.82, stdev=23.95 00:19:05.749 lat (usec): min=162, max=383, avg=198.64, stdev=23.90 00:19:05.749 clat percentiles (usec): 00:19:05.749 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:19:05.749 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 186], 00:19:05.749 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 223], 00:19:05.749 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 334], 99.95th=[ 371], 00:19:05.749 | 99.99th=[ 371] 00:19:05.749 bw ( KiB/s): min= 8192, max= 8192, per=65.56%, avg=8192.00, stdev= 0.00, samples=1 00:19:05.749 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:05.749 lat (usec) : 250=94.67%, 500=4.63% 00:19:05.749 lat (msec) : 50=0.70% 00:19:05.749 cpu : usr=1.09%, sys=3.77%, ctx=1990, majf=0, minf=1 00:19:05.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.749 issued rwts: total=965,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.749 00:19:05.749 Run status group 0 (all jobs): 00:19:05.749 READ: bw=7876KiB/s (8065kB/s), 83.8KiB/s-4092KiB/s (85.8kB/s-4190kB/s), io=8128KiB (8323kB), run=1001-1032msec 00:19:05.749 WRITE: bw=12.2MiB/s (12.8MB/s), 1984KiB/s-4699KiB/s (2032kB/s-4812kB/s), io=12.6MiB (13.2MB), run=1001-1032msec 00:19:05.749 00:19:05.749 Disk stats (read/write): 00:19:05.749 nvme0n1: ios=67/512, merge=0/0, ticks=729/123, in_queue=852, util=86.67% 00:19:05.749 nvme0n2: ios=32/512, merge=0/0, ticks=735/99, in_queue=834, util=87.08% 00:19:05.749 nvme0n3: ios=512/903, merge=0/0, ticks=643/176, in_queue=819, util=89.03% 00:19:05.749 nvme0n4: ios=1009/1024, merge=0/0, ticks=1172/175, in_queue=1347, util=98.42% 00:19:05.749 19:28:03 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:05.749 [global] 00:19:05.749 thread=1 00:19:05.749 invalidate=1 00:19:05.749 rw=randwrite 00:19:05.749 time_based=1 00:19:05.749 runtime=1 00:19:05.749 ioengine=libaio 00:19:05.749 direct=1 00:19:05.749 bs=4096 00:19:05.749 iodepth=1 00:19:05.749 norandommap=0 00:19:05.749 numjobs=1 00:19:05.749 00:19:05.749 verify_dump=1 00:19:05.749 verify_backlog=512 00:19:05.749 verify_state_save=0 00:19:05.749 do_verify=1 00:19:05.749 verify=crc32c-intel 00:19:05.749 [job0] 00:19:05.749 filename=/dev/nvme0n1 00:19:05.749 [job1] 00:19:05.749 filename=/dev/nvme0n2 00:19:05.749 [job2] 00:19:05.749 filename=/dev/nvme0n3 00:19:05.749 [job3] 00:19:05.749 filename=/dev/nvme0n4 00:19:05.749 Could not set queue depth (nvme0n1) 00:19:05.749 Could not set queue depth (nvme0n2) 00:19:05.749 Could not set queue depth (nvme0n3) 00:19:05.749 Could not set queue depth (nvme0n4) 00:19:06.009 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.009 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.009 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.009 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.009 fio-3.35 00:19:06.009 Starting 4 threads 00:19:07.387 00:19:07.387 job0: (groupid=0, jobs=1): err= 0: pid=1216126: Sun Nov 17 19:28:05 2024 00:19:07.387 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:19:07.387 slat (nsec): min=4334, max=31494, avg=7891.63, stdev=3041.70 00:19:07.387 clat (usec): min=152, max=602, avg=180.48, stdev=13.71 00:19:07.387 lat (usec): min=158, max=608, avg=188.37, stdev=14.52 00:19:07.387 clat percentiles (usec): 00:19:07.387 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:19:07.387 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:19:07.387 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:19:07.387 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 223], 99.95th=[ 225], 00:19:07.387 | 99.99th=[ 603] 00:19:07.388 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:19:07.388 slat (nsec): min=5916, max=42965, avg=11336.06, stdev=4539.81 00:19:07.388 clat (usec): min=113, max=249, avg=134.75, stdev=11.49 00:19:07.388 lat (usec): min=121, max=271, avg=146.08, stdev=13.34 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 126], 00:19:07.388 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 137], 00:19:07.388 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:19:07.388 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 188], 99.95th=[ 196], 00:19:07.388 | 99.99th=[ 251] 00:19:07.388 bw ( KiB/s): min=12288, max=12288, per=61.08%, avg=12288.00, stdev= 0.00, samples=1 00:19:07.388 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:07.388 lat (usec) : 250=99.98%, 750=0.02% 00:19:07.388 cpu : usr=3.30%, sys=5.70%, ctx=5918, majf=0, minf=1 00:19:07.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 issued rwts: total=2843,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.388 job1: (groupid=0, jobs=1): err= 0: pid=1216127: Sun Nov 17 19:28:05 2024 00:19:07.388 read: IOPS=529, BW=2120KiB/s (2170kB/s)(2128KiB/1004msec) 00:19:07.388 slat (nsec): min=5620, max=37285, avg=11826.18, stdev=6131.23 00:19:07.388 clat (usec): min=171, max=41997, avg=1408.55, stdev=6782.21 00:19:07.388 lat (usec): min=180, max=42014, avg=1420.37, stdev=6783.42 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:19:07.388 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:19:07.388 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 277], 95.00th=[ 302], 00:19:07.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:07.388 | 99.99th=[42206] 00:19:07.388 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:19:07.388 slat (nsec): min=7459, max=66645, avg=14950.92, stdev=8607.06 00:19:07.388 clat (usec): min=131, max=804, avg=220.64, stdev=93.77 00:19:07.388 lat (usec): 
min=139, max=814, avg=235.59, stdev=98.24 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:19:07.388 | 30.00th=[ 157], 40.00th=[ 167], 50.00th=[ 186], 60.00th=[ 219], 00:19:07.388 | 70.00th=[ 231], 80.00th=[ 251], 90.00th=[ 396], 95.00th=[ 445], 00:19:07.388 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 807], 00:19:07.388 | 99.99th=[ 807] 00:19:07.388 bw ( KiB/s): min= 8192, max= 8192, per=40.72%, avg=8192.00, stdev= 0.00, samples=1 00:19:07.388 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:07.388 lat (usec) : 250=80.08%, 500=18.06%, 750=0.64%, 1000=0.13% 00:19:07.388 lat (msec) : 2=0.06%, 20=0.06%, 50=0.96% 00:19:07.388 cpu : usr=1.60%, sys=2.89%, ctx=1556, majf=0, minf=2 00:19:07.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 issued rwts: total=532,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.388 job2: (groupid=0, jobs=1): err= 0: pid=1216128: Sun Nov 17 19:28:05 2024 00:19:07.388 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:19:07.388 slat (nsec): min=6983, max=33391, avg=20203.00, stdev=8345.67 00:19:07.388 clat (usec): min=377, max=42011, avg=39562.00, stdev=8767.81 00:19:07.388 lat (usec): min=393, max=42029, avg=39582.20, stdev=8768.71 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:07.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:19:07.388 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:07.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:07.388 | 99.99th=[42206] 00:19:07.388 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:19:07.388 slat (nsec): min=6237, max=61644, avg=15670.80, stdev=10633.53 00:19:07.388 clat (usec): min=162, max=750, avg=265.88, stdev=74.83 00:19:07.388 lat (usec): min=184, max=758, avg=281.55, stdev=75.47 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 196], 20.00th=[ 212], 00:19:07.388 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:19:07.388 | 70.00th=[ 289], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 400], 00:19:07.388 | 99.00th=[ 433], 99.50th=[ 453], 99.90th=[ 750], 99.95th=[ 750], 00:19:07.388 | 99.99th=[ 750] 00:19:07.388 bw ( KiB/s): min= 4096, max= 4096, per=20.36%, avg=4096.00, stdev= 0.00, samples=1 00:19:07.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:07.388 lat (usec) : 250=59.93%, 500=35.96%, 1000=0.19% 00:19:07.388 lat (msec) : 50=3.93% 00:19:07.388 cpu : usr=0.39%, sys=0.79%, ctx=534, majf=0, minf=2 00:19:07.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.388 job3: (groupid=0, jobs=1): err= 0: pid=1216129: Sun Nov 17 19:28:05 2024 00:19:07.388 read: IOPS=23, BW=95.8KiB/s (98.1kB/s)(96.0KiB/1002msec) 
00:19:07.388 slat (nsec): min=6248, max=32289, avg=20849.25, stdev=8432.36 00:19:07.388 clat (usec): min=245, max=41984, avg=36059.18, stdev=13816.10 00:19:07.388 lat (usec): min=277, max=41999, avg=36080.03, stdev=13813.83 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 347], 20.00th=[40633], 00:19:07.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:07.388 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:19:07.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:07.388 | 99.99th=[42206] 00:19:07.388 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:07.388 slat (nsec): min=6032, max=68568, avg=15003.72, stdev=10574.24 00:19:07.388 clat (usec): min=146, max=480, avg=245.02, stdev=81.21 00:19:07.388 lat (usec): min=153, max=517, avg=260.03, stdev=84.51 00:19:07.388 clat percentiles (usec): 00:19:07.388 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:19:07.388 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 210], 60.00th=[ 239], 00:19:07.388 | 70.00th=[ 281], 80.00th=[ 322], 90.00th=[ 388], 95.00th=[ 400], 00:19:07.388 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[ 482], 99.95th=[ 482], 00:19:07.388 | 99.99th=[ 482] 00:19:07.388 bw ( KiB/s): min= 4096, max= 4096, per=20.36%, avg=4096.00, stdev= 0.00, samples=1 00:19:07.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:07.388 lat (usec) : 250=61.19%, 500=34.89% 00:19:07.388 lat (msec) : 50=3.92% 00:19:07.388 cpu : usr=0.40%, sys=1.00%, ctx=536, majf=0, minf=2 00:19:07.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.388 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.388 00:19:07.388 Run status group 0 (all jobs): 00:19:07.388 READ: bw=13.1MiB/s (13.8MB/s), 86.4KiB/s-11.1MiB/s (88.5kB/s-11.6MB/s), io=13.4MiB (14.0MB), run=1001-1018msec 00:19:07.388 WRITE: bw=19.6MiB/s (20.6MB/s), 2012KiB/s-12.0MiB/s (2060kB/s-12.6MB/s), io=20.0MiB (21.0MB), run=1001-1018msec 00:19:07.388 00:19:07.388 Disk stats (read/write): 00:19:07.388 nvme0n1: ios=2428/2560, merge=0/0, ticks=912/320, in_queue=1232, util=97.49% 00:19:07.388 nvme0n2: ios=527/1024, merge=0/0, ticks=571/208, in_queue=779, util=86.60% 00:19:07.388 nvme0n3: ios=18/512, merge=0/0, ticks=705/115, in_queue=820, util=88.95% 00:19:07.388 nvme0n4: ios=20/512, merge=0/0, ticks=701/111, in_queue=812, util=89.60% 00:19:07.388 19:28:05 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:07.388 [global] 00:19:07.388 thread=1 00:19:07.388 invalidate=1 00:19:07.388 rw=write 00:19:07.388 time_based=1 00:19:07.388 runtime=1 00:19:07.389 ioengine=libaio 00:19:07.389 direct=1 00:19:07.389 bs=4096 00:19:07.389 iodepth=128 00:19:07.389 norandommap=0 00:19:07.389 numjobs=1 00:19:07.389 00:19:07.389 verify_dump=1 00:19:07.389 verify_backlog=512 00:19:07.389 verify_state_save=0 00:19:07.389 do_verify=1 00:19:07.389 verify=crc32c-intel 00:19:07.389 [job0] 00:19:07.389 filename=/dev/nvme0n1 00:19:07.389 [job1] 00:19:07.389 filename=/dev/nvme0n2 00:19:07.389 [job2] 00:19:07.389 filename=/dev/nvme0n3 00:19:07.389 [job3] 00:19:07.389 
filename=/dev/nvme0n4 00:19:07.389 Could not set queue depth (nvme0n1) 00:19:07.389 Could not set queue depth (nvme0n2) 00:19:07.389 Could not set queue depth (nvme0n3) 00:19:07.389 Could not set queue depth (nvme0n4) 00:19:07.389 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.389 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.389 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.389 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.389 fio-3.35 00:19:07.389 Starting 4 threads 00:19:08.764 00:19:08.764 job0: (groupid=0, jobs=1): err= 0: pid=1216364: Sun Nov 17 19:28:06 2024 00:19:08.764 read: IOPS=5590, BW=21.8MiB/s (22.9MB/s)(22.1MiB/1010msec) 00:19:08.764 slat (usec): min=2, max=27334, avg=88.49, stdev=682.99 00:19:08.764 clat (usec): min=3857, max=37393, avg=11694.41, stdev=4637.14 00:19:08.764 lat (usec): min=3868, max=37404, avg=11782.90, stdev=4661.34 00:19:08.764 clat percentiles (usec): 00:19:08.764 | 1.00th=[ 5145], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[ 9241], 00:19:08.764 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:19:08.764 | 70.00th=[11863], 80.00th=[13304], 90.00th=[16188], 95.00th=[18744], 00:19:08.764 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:19:08.764 | 99.99th=[37487] 00:19:08.764 write: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec); 0 zone resets 00:19:08.764 slat (usec): min=3, max=7771, avg=70.97, stdev=346.92 00:19:08.764 clat (usec): min=1353, max=21110, avg=10084.73, stdev=2462.90 00:19:08.764 lat (usec): min=1362, max=21118, avg=10155.69, stdev=2487.35 00:19:08.764 clat percentiles (usec): 00:19:08.764 | 1.00th=[ 3130], 5.00th=[ 4948], 10.00th=[ 6390], 20.00th=[ 8848], 00:19:08.764 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:19:08.764 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[12256], 00:19:08.764 | 99.00th=[17433], 99.50th=[19530], 99.90th=[20841], 99.95th=[21103], 00:19:08.764 | 99.99th=[21103] 00:19:08.764 bw ( KiB/s): min=23752, max=24496, per=34.55%, avg=24124.00, stdev=526.09, samples=2 00:19:08.764 iops : min= 5938, max= 6124, avg=6031.00, stdev=131.52, samples=2 00:19:08.765 lat (msec) : 2=0.18%, 4=1.15%, 10=35.04%, 20=62.01%, 50=1.63% 00:19:08.765 cpu : usr=7.33%, sys=13.38%, ctx=646, majf=0, minf=1 00:19:08.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:08.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.765 issued rwts: total=5646,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.765 job1: (groupid=0, jobs=1): err= 0: pid=1216365: Sun Nov 17 19:28:06 2024 00:19:08.765 read: IOPS=5375, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1001msec) 00:19:08.765 slat (usec): min=3, max=3801, avg=81.32, stdev=379.94 00:19:08.765 clat (usec): min=462, max=12763, avg=11049.40, stdev=1090.54 00:19:08.765 lat (usec): min=2713, max=14959, avg=11130.72, stdev=1030.93 00:19:08.765 clat percentiles (usec): 00:19:08.765 | 1.00th=[ 5735], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10814], 00:19:08.765 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:19:08.765 | 
70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:19:08.765 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12780], 99.95th=[12780], 00:19:08.765 | 99.99th=[12780] 00:19:08.765 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:19:08.765 slat (usec): min=3, max=18496, avg=88.69, stdev=518.50 00:19:08.765 clat (usec): min=8212, max=47847, avg=11345.07, stdev=2812.57 00:19:08.765 lat (usec): min=8230, max=47864, avg=11433.76, stdev=2855.00 00:19:08.765 clat percentiles (usec): 00:19:08.765 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:19:08.765 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11207], 60.00th=[11600], 00:19:08.765 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:19:08.765 | 99.00th=[24773], 99.50th=[32637], 99.90th=[42206], 99.95th=[42206], 00:19:08.765 | 99.99th=[47973] 00:19:08.765 bw ( KiB/s): min=20872, max=20872, per=29.89%, avg=20872.00, stdev= 0.00, samples=1 00:19:08.765 iops : min= 5218, max= 5218, avg=5218.00, stdev= 0.00, samples=1 00:19:08.765 lat (usec) : 500=0.01% 00:19:08.765 lat (msec) : 4=0.29%, 10=19.72%, 20=79.14%, 50=0.84% 00:19:08.765 cpu : usr=8.00%, sys=12.70%, ctx=464, majf=0, minf=1 00:19:08.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:08.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.765 issued rwts: total=5381,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.765 job2: (groupid=0, jobs=1): err= 0: pid=1216368: Sun Nov 17 19:28:06 2024 00:19:08.765 read: IOPS=2076, BW=8307KiB/s (8506kB/s)(8440KiB/1016msec) 00:19:08.765 slat (usec): min=3, max=11978, avg=183.43, stdev=1151.80 00:19:08.765 clat (usec): min=5478, max=98458, avg=17026.19, stdev=11374.04 00:19:08.765 lat (usec): min=5496, max=98472, avg=17209.61, stdev=11566.69 00:19:08.765 clat percentiles (usec): 00:19:08.765 | 1.00th=[ 6325], 5.00th=[ 9634], 10.00th=[11469], 20.00th=[12387], 00:19:08.765 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13829], 00:19:08.765 | 70.00th=[14353], 80.00th=[17695], 90.00th=[26608], 95.00th=[40109], 00:19:08.765 | 99.00th=[69731], 99.50th=[86508], 99.90th=[98042], 99.95th=[98042], 00:19:08.765 | 99.99th=[98042] 00:19:08.765 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec); 0 zone resets 00:19:08.765 slat (usec): min=4, max=11732, avg=231.86, stdev=1093.57 00:19:08.765 clat (usec): min=1255, max=138707, avg=36287.76, stdev=30482.80 00:19:08.765 lat (usec): min=1268, max=138725, avg=36519.62, stdev=30655.87 00:19:08.765 clat percentiles (msec): 00:19:08.765 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13], 00:19:08.765 | 30.00th=[ 16], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 33], 00:19:08.765 | 70.00th=[ 39], 80.00th=[ 57], 90.00th=[ 88], 95.00th=[ 113], 00:19:08.765 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 140], 00:19:08.765 | 99.99th=[ 140] 00:19:08.765 bw ( KiB/s): min= 9000, max=10952, per=14.29%, avg=9976.00, stdev=1380.27, samples=2 00:19:08.765 iops : min= 2250, max= 2738, avg=2494.00, stdev=345.07, samples=2 00:19:08.765 lat (msec) : 2=0.45%, 4=0.43%, 10=4.63%, 20=50.94%, 50=29.94% 00:19:08.765 lat (msec) : 100=9.04%, 250=4.58% 00:19:08.765 cpu : usr=4.14%, sys=4.24%, ctx=279, majf=0, minf=2 00:19:08.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 
00:19:08.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.765 issued rwts: total=2110,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.765 job3: (groupid=0, jobs=1): err= 0: pid=1216369: Sun Nov 17 19:28:06 2024 00:19:08.765 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:19:08.765 slat (usec): min=2, max=13296, avg=123.16, stdev=784.74 00:19:08.765 clat (usec): min=5511, max=36207, avg=14990.93, stdev=4935.62 00:19:08.765 lat (usec): min=5530, max=36212, avg=15114.09, stdev=4984.24 00:19:08.765 clat percentiles (usec): 00:19:08.765 | 1.00th=[ 7111], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11731], 00:19:08.765 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13435], 60.00th=[14615], 00:19:08.765 | 70.00th=[15795], 80.00th=[18744], 90.00th=[21365], 95.00th=[25297], 00:19:08.765 | 99.00th=[32637], 99.50th=[33424], 99.90th=[36439], 99.95th=[36439], 00:19:08.765 | 99.99th=[36439] 00:19:08.765 write: IOPS=3371, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1008msec); 0 zone resets 00:19:08.765 slat (usec): min=3, max=12368, avg=173.56, stdev=1021.02 00:19:08.765 clat (msec): min=2, max=113, avg=24.00, stdev=21.18 00:19:08.765 lat (msec): min=2, max=113, avg=24.18, stdev=21.32 00:19:08.765 clat percentiles (msec): 00:19:08.765 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:19:08.765 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 22], 00:19:08.765 | 70.00th=[ 24], 80.00th=[ 31], 90.00th=[ 54], 95.00th=[ 65], 00:19:08.765 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 114], 99.95th=[ 114], 00:19:08.765 | 99.99th=[ 114] 00:19:08.765 bw ( KiB/s): min=12864, max=13296, per=18.73%, avg=13080.00, stdev=305.47, samples=2 00:19:08.765 iops : min= 3216, max= 3324, avg=3270.00, stdev=76.37, samples=2 00:19:08.765 lat (msec) : 4=0.11%, 10=7.73%, 20=63.88%, 50=21.62%, 100=5.49% 00:19:08.765 lat (msec) : 250=1.17% 00:19:08.765 cpu : usr=3.38%, sys=8.04%, ctx=285, majf=0, minf=1 00:19:08.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:08.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.765 issued rwts: total=3072,3398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.765 00:19:08.765 Run status group 0 (all jobs): 00:19:08.765 READ: bw=62.3MiB/s (65.3MB/s), 8307KiB/s-21.8MiB/s (8506kB/s-22.9MB/s), io=63.3MiB (66.4MB), run=1001-1016msec 00:19:08.765 WRITE: bw=68.2MiB/s (71.5MB/s), 9.84MiB/s-23.8MiB/s (10.3MB/s-24.9MB/s), io=69.3MiB (72.6MB), run=1001-1016msec 00:19:08.765 00:19:08.765 Disk stats (read/write): 00:19:08.765 nvme0n1: ios=4692/5120, merge=0/0, ticks=49953/49281, in_queue=99234, util=97.70% 00:19:08.765 nvme0n2: ios=4641/4696, merge=0/0, ticks=12340/12664, in_queue=25004, util=97.66% 00:19:08.765 nvme0n3: ios=2048/2335, merge=0/0, ticks=31448/70397, in_queue=101845, util=88.95% 00:19:08.765 nvme0n4: ios=2253/2560, merge=0/0, ticks=30325/67612, in_queue=97937, util=97.48% 00:19:08.765 19:28:06 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:08.765 [global] 00:19:08.765 thread=1 00:19:08.765 invalidate=1 00:19:08.765 rw=randwrite 00:19:08.765 time_based=1 00:19:08.765 
runtime=1 00:19:08.765 ioengine=libaio 00:19:08.765 direct=1 00:19:08.765 bs=4096 00:19:08.765 iodepth=128 00:19:08.765 norandommap=0 00:19:08.765 numjobs=1 00:19:08.765 00:19:08.765 verify_dump=1 00:19:08.765 verify_backlog=512 00:19:08.765 verify_state_save=0 00:19:08.765 do_verify=1 00:19:08.765 verify=crc32c-intel 00:19:08.765 [job0] 00:19:08.765 filename=/dev/nvme0n1 00:19:08.765 [job1] 00:19:08.765 filename=/dev/nvme0n2 00:19:08.766 [job2] 00:19:08.766 filename=/dev/nvme0n3 00:19:08.766 [job3] 00:19:08.766 filename=/dev/nvme0n4 00:19:08.766 Could not set queue depth (nvme0n1) 00:19:08.766 Could not set queue depth (nvme0n2) 00:19:08.766 Could not set queue depth (nvme0n3) 00:19:08.766 Could not set queue depth (nvme0n4) 00:19:09.024 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.024 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.024 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.024 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.024 fio-3.35 00:19:09.024 Starting 4 threads 00:19:10.397 00:19:10.397 job0: (groupid=0, jobs=1): err= 0: pid=1216722: Sun Nov 17 19:28:08 2024 00:19:10.397 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:19:10.397 slat (usec): min=2, max=13106, avg=80.81, stdev=419.21 00:19:10.397 clat (usec): min=5429, max=33376, avg=11145.43, stdev=2718.23 00:19:10.397 lat (usec): min=5434, max=33380, avg=11226.24, stdev=2714.74 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:19:10.397 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:10.397 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[13173], 00:19:10.397 | 99.00th=[25560], 99.50th=[31065], 99.90th=[33424], 99.95th=[33424], 00:19:10.397 | 99.99th=[33424] 00:19:10.397 write: IOPS=5678, BW=22.2MiB/s (23.3MB/s)(22.2MiB/1002msec); 0 zone resets 00:19:10.397 slat (usec): min=4, max=19236, avg=83.81, stdev=468.98 00:19:10.397 clat (usec): min=367, max=35416, avg=10971.45, stdev=2788.15 00:19:10.397 lat (usec): min=4114, max=35427, avg=11055.26, stdev=2794.26 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 7111], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:19:10.397 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:19:10.397 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[12911], 00:19:10.397 | 99.00th=[25822], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:19:10.397 | 99.99th=[35390] 00:19:10.397 bw ( KiB/s): min=22368, max=22733, per=33.25%, avg=22550.50, stdev=258.09, samples=2 00:19:10.397 iops : min= 5592, max= 5683, avg=5637.50, stdev=64.35, samples=2 00:19:10.397 lat (usec) : 500=0.01% 00:19:10.397 lat (msec) : 10=27.88%, 20=70.25%, 50=1.85% 00:19:10.397 cpu : usr=9.49%, sys=12.99%, ctx=481, majf=0, minf=1 00:19:10.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.397 issued rwts: total=5632,5690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.397 job1: (groupid=0, jobs=1): 
err= 0: pid=1216723: Sun Nov 17 19:28:08 2024 00:19:10.397 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:19:10.397 slat (usec): min=3, max=16967, avg=140.40, stdev=944.39 00:19:10.397 clat (usec): min=7613, max=52763, avg=17171.83, stdev=6801.68 00:19:10.397 lat (usec): min=7628, max=52780, avg=17312.23, stdev=6886.73 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 8225], 5.00th=[10814], 10.00th=[11469], 20.00th=[14091], 00:19:10.397 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15401], 60.00th=[15795], 00:19:10.397 | 70.00th=[16450], 80.00th=[17433], 90.00th=[25297], 95.00th=[36439], 00:19:10.397 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[47449], 00:19:10.397 | 99.99th=[52691] 00:19:10.397 write: IOPS=2737, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1005msec); 0 zone resets 00:19:10.397 slat (usec): min=5, max=38813, avg=222.46, stdev=1439.65 00:19:10.397 clat (usec): min=4256, max=96102, avg=30333.97, stdev=17201.09 00:19:10.397 lat (usec): min=4925, max=96120, avg=30556.43, stdev=17305.36 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 7308], 5.00th=[11600], 10.00th=[14091], 20.00th=[16057], 00:19:10.397 | 30.00th=[20841], 40.00th=[23725], 50.00th=[25035], 60.00th=[25297], 00:19:10.397 | 70.00th=[33424], 80.00th=[45351], 90.00th=[59507], 95.00th=[64750], 00:19:10.397 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[85459], 00:19:10.397 | 99.99th=[95945] 00:19:10.397 bw ( KiB/s): min= 9464, max=11528, per=15.48%, avg=10496.00, stdev=1459.47, samples=2 00:19:10.397 iops : min= 2366, max= 2882, avg=2624.00, stdev=364.87, samples=2 00:19:10.397 lat (msec) : 10=3.33%, 20=52.72%, 50=34.61%, 100=9.34% 00:19:10.397 cpu : usr=2.09%, sys=3.88%, ctx=295, majf=0, minf=1 00:19:10.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.397 issued rwts: total=2560,2751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.397 job2: (groupid=0, jobs=1): err= 0: pid=1216724: Sun Nov 17 19:28:08 2024 00:19:10.397 read: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1009msec) 00:19:10.397 slat (usec): min=3, max=16212, avg=150.08, stdev=1010.34 00:19:10.397 clat (usec): min=5476, max=58482, avg=17765.96, stdev=7432.68 00:19:10.397 lat (usec): min=5486, max=58489, avg=17916.04, stdev=7512.11 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[12518], 00:19:10.397 | 30.00th=[13042], 40.00th=[13960], 50.00th=[16581], 60.00th=[17957], 00:19:10.397 | 70.00th=[18744], 80.00th=[21365], 90.00th=[26870], 95.00th=[32637], 00:19:10.397 | 99.00th=[45351], 99.50th=[52167], 99.90th=[58459], 99.95th=[58459], 00:19:10.397 | 99.99th=[58459] 00:19:10.397 write: IOPS=3130, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1009msec); 0 zone resets 00:19:10.397 slat (usec): min=4, max=21998, avg=147.34, stdev=743.33 00:19:10.397 clat (usec): min=672, max=65329, avg=23719.97, stdev=12525.96 00:19:10.397 lat (usec): min=695, max=65439, avg=23867.31, stdev=12612.07 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 2835], 5.00th=[ 4555], 10.00th=[ 7570], 20.00th=[10159], 00:19:10.397 | 30.00th=[15270], 40.00th=[22414], 50.00th=[24773], 60.00th=[25035], 00:19:10.397 | 70.00th=[28705], 80.00th=[35390], 90.00th=[39584], 95.00th=[42730], 00:19:10.397 | 
99.00th=[61080], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:19:10.397 | 99.99th=[65274] 00:19:10.397 bw ( KiB/s): min=12288, max=12984, per=18.63%, avg=12636.00, stdev=492.15, samples=2 00:19:10.397 iops : min= 3072, max= 3246, avg=3159.00, stdev=123.04, samples=2 00:19:10.397 lat (usec) : 750=0.02%, 1000=0.10% 00:19:10.397 lat (msec) : 4=1.17%, 10=11.04%, 20=42.16%, 50=43.96%, 100=1.55% 00:19:10.397 cpu : usr=4.96%, sys=6.85%, ctx=344, majf=0, minf=1 00:19:10.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.397 issued rwts: total=2980,3159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.397 job3: (groupid=0, jobs=1): err= 0: pid=1216725: Sun Nov 17 19:28:08 2024 00:19:10.397 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:19:10.397 slat (usec): min=3, max=10726, avg=87.79, stdev=588.24 00:19:10.397 clat (usec): min=3238, max=30644, avg=11667.05, stdev=2653.49 00:19:10.397 lat (usec): min=3599, max=30654, avg=11754.84, stdev=2681.99 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 4686], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[10552], 00:19:10.397 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:19:10.397 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13960], 95.00th=[15270], 00:19:10.397 | 99.00th=[20579], 99.50th=[22414], 99.90th=[30540], 99.95th=[30540], 00:19:10.397 | 99.99th=[30540] 00:19:10.397 write: IOPS=5455, BW=21.3MiB/s (22.3MB/s)(21.5MiB/1009msec); 0 zone resets 00:19:10.397 slat (usec): min=4, max=34335, avg=84.74, stdev=758.88 00:19:10.397 clat (usec): min=342, max=55582, avg=12347.42, stdev=4826.75 00:19:10.397 lat (usec): min=965, max=55622, avg=12432.16, stdev=4874.76 00:19:10.397 clat percentiles (usec): 00:19:10.397 | 1.00th=[ 4146], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[10552], 00:19:10.397 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:19:10.397 | 70.00th=[12256], 80.00th=[12649], 90.00th=[16319], 95.00th=[21103], 00:19:10.397 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:19:10.397 | 99.99th=[55837] 00:19:10.397 bw ( KiB/s): min=20576, max=22440, per=31.72%, avg=21508.00, stdev=1318.05, samples=2 00:19:10.397 iops : min= 5144, max= 5610, avg=5377.00, stdev=329.51, samples=2 00:19:10.397 lat (usec) : 500=0.01%, 750=0.01% 00:19:10.397 lat (msec) : 2=0.15%, 4=0.37%, 10=15.44%, 20=80.50%, 50=3.51% 00:19:10.397 lat (msec) : 100=0.01% 00:19:10.397 cpu : usr=7.74%, sys=11.61%, ctx=354, majf=0, minf=1 00:19:10.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:10.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.398 issued rwts: total=5120,5505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.398 00:19:10.398 Run status group 0 (all jobs): 00:19:10.398 READ: bw=63.1MiB/s (66.1MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=63.6MiB (66.7MB), run=1002-1009msec 00:19:10.398 WRITE: bw=66.2MiB/s (69.4MB/s), 10.7MiB/s-22.2MiB/s (11.2MB/s-23.3MB/s), io=66.8MiB (70.1MB), run=1002-1009msec 00:19:10.398 00:19:10.398 Disk stats (read/write): 00:19:10.398 nvme0n1: ios=4643/4984, 
merge=0/0, ticks=14180/14167, in_queue=28347, util=97.70% 00:19:10.398 nvme0n2: ios=2018/2048, merge=0/0, ticks=17632/35943, in_queue=53575, util=97.66% 00:19:10.398 nvme0n3: ios=2338/2560, merge=0/0, ticks=40684/64039, in_queue=104723, util=98.85% 00:19:10.398 nvme0n4: ios=4323/4608, merge=0/0, ticks=33532/39925, in_queue=73457, util=89.65% 00:19:10.398 19:28:08 -- target/fio.sh@55 -- # sync 00:19:10.398 19:28:08 -- target/fio.sh@59 -- # fio_pid=1216861 00:19:10.398 19:28:08 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:10.398 19:28:08 -- target/fio.sh@61 -- # sleep 3 00:19:10.398 [global] 00:19:10.398 thread=1 00:19:10.398 invalidate=1 00:19:10.398 rw=read 00:19:10.398 time_based=1 00:19:10.398 runtime=10 00:19:10.398 ioengine=libaio 00:19:10.398 direct=1 00:19:10.398 bs=4096 00:19:10.398 iodepth=1 00:19:10.398 norandommap=1 00:19:10.398 numjobs=1 00:19:10.398 00:19:10.398 [job0] 00:19:10.398 filename=/dev/nvme0n1 00:19:10.398 [job1] 00:19:10.398 filename=/dev/nvme0n2 00:19:10.398 [job2] 00:19:10.398 filename=/dev/nvme0n3 00:19:10.398 [job3] 00:19:10.398 filename=/dev/nvme0n4 00:19:10.398 Could not set queue depth (nvme0n1) 00:19:10.398 Could not set queue depth (nvme0n2) 00:19:10.398 Could not set queue depth (nvme0n3) 00:19:10.398 Could not set queue depth (nvme0n4) 00:19:10.398 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.398 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.398 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.398 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.398 fio-3.35 00:19:10.398 Starting 4 threads 00:19:13.676 19:28:11 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:13.676 19:28:11 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:13.676 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27226112, buflen=4096 00:19:13.676 fio: pid=1216962, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:13.676 19:28:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.676 19:28:11 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:13.676 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33079296, buflen=4096 00:19:13.676 fio: pid=1216961, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:13.934 19:28:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.934 19:28:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:13.934 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=356352, buflen=4096 00:19:13.934 fio: pid=1216959, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:14.192 19:28:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.192 19:28:12 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:14.192 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=626688, buflen=4096 00:19:14.192 fio: pid=1216960, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:14.192 00:19:14.192 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1216959: Sun Nov 17 19:28:12 2024 00:19:14.192 read: IOPS=25, BW=101KiB/s (104kB/s)(348KiB/3442msec) 00:19:14.192 slat (usec): min=7, max=8932, avg=212.92, stdev=1264.92 00:19:14.192 clat (usec): min=214, max=42023, avg=39074.88, stdev=9653.28 00:19:14.192 lat (usec): min=232, max=50000, avg=39289.85, stdev=9786.49 00:19:14.192 clat percentiles (usec): 00:19:14.192 | 1.00th=[ 215], 5.00th=[ 367], 10.00th=[40633], 20.00th=[41157], 00:19:14.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:19:14.192 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:14.192 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:14.192 | 99.99th=[42206] 00:19:14.192 bw ( KiB/s): min= 96, max= 128, per=0.63%, avg=102.67, stdev=12.82, samples=6 00:19:14.192 iops : min= 24, max= 32, avg=25.67, stdev= 3.20, samples=6 00:19:14.192 lat (usec) : 250=4.55%, 500=1.14% 00:19:14.192 lat (msec) : 50=93.18% 00:19:14.192 cpu : usr=0.00%, sys=0.12%, ctx=90, majf=0, minf=2 00:19:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.193 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1216960: Sun Nov 17 19:28:12 2024 00:19:14.193 read: IOPS=41, BW=165KiB/s (169kB/s)(612KiB/3698msec) 00:19:14.193 slat (usec): min=4, max=9844, avg=119.30, stdev=919.96 00:19:14.193 clat (usec): min=174, max=44970, avg=23886.12, stdev=20313.85 00:19:14.193 lat (usec): min=180, max=46935, avg=23941.86, stdev=20354.56 00:19:14.193 clat percentiles (usec): 00:19:14.193 | 1.00th=[ 184], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 273], 00:19:14.193 | 30.00th=[ 306], 40.00th=[ 461], 50.00th=[41157], 60.00th=[41157], 00:19:14.193 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:14.193 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:19:14.193 | 99.99th=[44827] 00:19:14.193 bw ( KiB/s): min= 99, max= 304, per=1.04%, avg=169.57, stdev=74.48, samples=7 00:19:14.193 iops : min= 24, max= 76, avg=42.29, stdev=18.74, samples=7 00:19:14.193 lat (usec) : 250=6.49%, 500=33.77%, 750=0.65% 00:19:14.193 lat (msec) : 2=0.65%, 4=0.65%, 50=57.14% 00:19:14.193 cpu : usr=0.16%, sys=0.00%, ctx=157, majf=0, minf=1 00:19:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.193 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=1216961: Sun Nov 17 19:28:12 2024 00:19:14.193 read: IOPS=2556, BW=9.98MiB/s (10.5MB/s)(31.5MiB/3160msec) 00:19:14.193 slat (nsec): min=4353, max=59894, avg=9596.41, stdev=4948.34 00:19:14.193 clat (usec): min=164, max=42036, avg=376.83, stdev=2737.18 00:19:14.193 lat (usec): min=170, max=42052, avg=386.43, stdev=2738.03 00:19:14.193 clat percentiles (usec): 00:19:14.193 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:19:14.193 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:19:14.193 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:19:14.193 | 99.00th=[ 289], 99.50th=[ 603], 99.90th=[42206], 99.95th=[42206], 00:19:14.193 | 99.99th=[42206] 00:19:14.193 bw ( KiB/s): min= 104, max=20088, per=60.75%, avg=9833.33, stdev=8485.32, samples=6 00:19:14.193 iops : min= 26, max= 5022, avg=2458.33, stdev=2121.33, samples=6 00:19:14.193 lat (usec) : 250=97.82%, 500=1.66%, 750=0.01%, 1000=0.01% 00:19:14.193 lat (msec) : 2=0.04%, 50=0.45% 00:19:14.193 cpu : usr=1.08%, sys=2.69%, ctx=8081, majf=0, minf=2 00:19:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 issued rwts: total=8077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.193 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1216962: Sun Nov 17 19:28:12 2024 00:19:14.193 read: IOPS=2275, BW=9099KiB/s (9318kB/s)(26.0MiB/2922msec) 00:19:14.193 slat (nsec): min=4087, max=63265, avg=10199.13, stdev=7059.43 00:19:14.193 clat (usec): min=163, max=41329, avg=423.59, stdev=2947.38 00:19:14.193 lat (usec): min=175, max=41346, avg=433.79, stdev=2947.98 00:19:14.193 clat percentiles (usec): 00:19:14.193 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:19:14.193 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:19:14.193 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 255], 95.00th=[ 277], 00:19:14.193 | 99.00th=[ 338], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:19:14.193 | 99.99th=[41157] 00:19:14.193 bw ( KiB/s): min= 128, max=18880, per=64.73%, avg=10476.80, stdev=9078.83, samples=5 00:19:14.193 iops : min= 32, max= 4720, avg=2619.20, stdev=2269.71, samples=5 00:19:14.193 lat (usec) : 250=89.43%, 500=9.99%, 750=0.02% 00:19:14.193 lat (msec) : 2=0.02%, 4=0.02%, 50=0.53% 00:19:14.193 cpu : usr=1.06%, sys=2.74%, ctx=6649, majf=0, minf=1 00:19:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.193 issued rwts: total=6648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.193 00:19:14.193 Run status group 0 (all jobs): 00:19:14.193 READ: bw=15.8MiB/s (16.6MB/s), 101KiB/s-9.98MiB/s (104kB/s-10.5MB/s), io=58.4MiB (61.3MB), run=2922-3698msec 00:19:14.193 00:19:14.193 Disk stats (read/write): 00:19:14.193 nvme0n1: ios=85/0, merge=0/0, ticks=3319/0, in_queue=3319, util=95.59% 00:19:14.193 nvme0n2: ios=151/0, merge=0/0, ticks=3574/0, in_queue=3574, util=96.49% 00:19:14.193 nvme0n3: ios=7945/0, merge=0/0, ticks=3322/0, in_queue=3322, util=99.28% 00:19:14.193 
nvme0n4: ios=6645/0, merge=0/0, ticks=2686/0, in_queue=2686, util=96.78% 00:19:14.451 19:28:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.451 19:28:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:14.709 19:28:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.709 19:28:12 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:14.968 19:28:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.968 19:28:13 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:15.226 19:28:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.226 19:28:13 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:15.484 19:28:13 -- target/fio.sh@69 -- # fio_status=0 00:19:15.484 19:28:13 -- target/fio.sh@70 -- # wait 1216861 00:19:15.484 19:28:13 -- target/fio.sh@70 -- # fio_status=4 00:19:15.484 19:28:13 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.742 19:28:13 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.742 19:28:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:15.742 19:28:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:15.742 19:28:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.742 19:28:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:15.742 19:28:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.742 19:28:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:15.742 19:28:13 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:15.742 19:28:13 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:15.742 nvmf hotplug test: fio failed as expected 00:19:15.742 19:28:13 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.000 19:28:14 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:16.000 19:28:14 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:16.000 19:28:14 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:16.000 19:28:14 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:16.000 19:28:14 -- target/fio.sh@91 -- # nvmftestfini 00:19:16.000 19:28:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.000 19:28:14 -- nvmf/common.sh@116 -- # sync 00:19:16.000 19:28:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:16.000 19:28:14 -- nvmf/common.sh@119 -- # set +e 00:19:16.000 19:28:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.000 19:28:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:16.000 rmmod nvme_tcp 00:19:16.000 rmmod nvme_fabrics 00:19:16.000 rmmod nvme_keyring 00:19:16.000 19:28:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.000 19:28:14 -- nvmf/common.sh@123 -- # set -e 00:19:16.000 19:28:14 -- nvmf/common.sh@124 -- # return 0 00:19:16.000 19:28:14 -- nvmf/common.sh@477 -- # '[' -n 1214747 ']' 00:19:16.000 19:28:14 -- 
nvmf/common.sh@478 -- # killprocess 1214747 00:19:16.000 19:28:14 -- common/autotest_common.sh@936 -- # '[' -z 1214747 ']' 00:19:16.000 19:28:14 -- common/autotest_common.sh@940 -- # kill -0 1214747 00:19:16.000 19:28:14 -- common/autotest_common.sh@941 -- # uname 00:19:16.000 19:28:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.000 19:28:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1214747 00:19:16.000 19:28:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:16.000 19:28:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:16.000 19:28:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1214747' 00:19:16.000 killing process with pid 1214747 00:19:16.000 19:28:14 -- common/autotest_common.sh@955 -- # kill 1214747 00:19:16.000 19:28:14 -- common/autotest_common.sh@960 -- # wait 1214747 00:19:16.259 19:28:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:16.259 19:28:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:16.259 19:28:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:16.259 19:28:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.259 19:28:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:16.259 19:28:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.259 19:28:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.259 19:28:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.867 19:28:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:18.867 00:19:18.868 real 0m24.162s 00:19:18.868 user 1m25.554s 00:19:18.868 sys 0m6.551s 00:19:18.868 19:28:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:18.868 19:28:16 -- common/autotest_common.sh@10 -- # set +x 00:19:18.868 ************************************ 00:19:18.868 END TEST nvmf_fio_target 00:19:18.868 ************************************ 00:19:18.868 19:28:16 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:18.868 19:28:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:18.868 19:28:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:18.868 19:28:16 -- common/autotest_common.sh@10 -- # set +x 00:19:18.868 ************************************ 00:19:18.868 START TEST nvmf_bdevio 00:19:18.868 ************************************ 00:19:18.868 19:28:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:18.868 * Looking for test storage... 
00:19:18.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.868 19:28:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:18.868 19:28:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:18.868 19:28:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:18.868 19:28:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:18.868 19:28:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:18.868 19:28:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:18.868 19:28:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:18.868 19:28:16 -- scripts/common.sh@335 -- # IFS=.-: 00:19:18.868 19:28:16 -- scripts/common.sh@335 -- # read -ra ver1 00:19:18.868 19:28:16 -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.868 19:28:16 -- scripts/common.sh@336 -- # read -ra ver2 00:19:18.868 19:28:16 -- scripts/common.sh@337 -- # local 'op=<' 00:19:18.868 19:28:16 -- scripts/common.sh@339 -- # ver1_l=2 00:19:18.868 19:28:16 -- scripts/common.sh@340 -- # ver2_l=1 00:19:18.868 19:28:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:18.868 19:28:16 -- scripts/common.sh@343 -- # case "$op" in 00:19:18.868 19:28:16 -- scripts/common.sh@344 -- # : 1 00:19:18.868 19:28:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:18.868 19:28:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.868 19:28:16 -- scripts/common.sh@364 -- # decimal 1 00:19:18.868 19:28:16 -- scripts/common.sh@352 -- # local d=1 00:19:18.868 19:28:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.868 19:28:16 -- scripts/common.sh@354 -- # echo 1 00:19:18.868 19:28:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:18.868 19:28:16 -- scripts/common.sh@365 -- # decimal 2 00:19:18.868 19:28:16 -- scripts/common.sh@352 -- # local d=2 00:19:18.868 19:28:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.868 19:28:16 -- scripts/common.sh@354 -- # echo 2 00:19:18.868 19:28:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:18.868 19:28:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:18.868 19:28:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:18.868 19:28:16 -- scripts/common.sh@367 -- # return 0 00:19:18.868 19:28:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.868 19:28:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.868 --rc genhtml_branch_coverage=1 00:19:18.868 --rc genhtml_function_coverage=1 00:19:18.868 --rc genhtml_legend=1 00:19:18.868 --rc geninfo_all_blocks=1 00:19:18.868 --rc geninfo_unexecuted_blocks=1 00:19:18.868 00:19:18.868 ' 00:19:18.868 19:28:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.868 --rc genhtml_branch_coverage=1 00:19:18.868 --rc genhtml_function_coverage=1 00:19:18.868 --rc genhtml_legend=1 00:19:18.868 --rc geninfo_all_blocks=1 00:19:18.868 --rc geninfo_unexecuted_blocks=1 00:19:18.868 00:19:18.868 ' 00:19:18.868 19:28:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.868 --rc genhtml_branch_coverage=1 00:19:18.868 --rc genhtml_function_coverage=1 00:19:18.868 --rc genhtml_legend=1 00:19:18.868 --rc geninfo_all_blocks=1 00:19:18.868 --rc geninfo_unexecuted_blocks=1 00:19:18.868 00:19:18.868 
' 00:19:18.868 19:28:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:18.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.868 --rc genhtml_branch_coverage=1 00:19:18.868 --rc genhtml_function_coverage=1 00:19:18.868 --rc genhtml_legend=1 00:19:18.868 --rc geninfo_all_blocks=1 00:19:18.868 --rc geninfo_unexecuted_blocks=1 00:19:18.868 00:19:18.868 ' 00:19:18.868 19:28:16 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.868 19:28:16 -- nvmf/common.sh@7 -- # uname -s 00:19:18.868 19:28:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.868 19:28:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.868 19:28:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.868 19:28:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.868 19:28:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.868 19:28:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.868 19:28:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.868 19:28:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.868 19:28:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.868 19:28:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.868 19:28:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.868 19:28:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.868 19:28:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.868 19:28:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.868 19:28:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.868 19:28:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.868 19:28:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.868 19:28:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.868 19:28:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.868 19:28:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.868 19:28:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.868 19:28:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.869 19:28:16 -- paths/export.sh@5 -- # export PATH 00:19:18.869 19:28:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.869 19:28:16 -- nvmf/common.sh@46 -- # : 0 00:19:18.869 19:28:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:18.869 19:28:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:18.869 19:28:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:18.869 19:28:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.869 19:28:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.869 19:28:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:18.869 19:28:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:18.869 19:28:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:18.869 19:28:16 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.869 19:28:16 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.869 19:28:16 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:18.869 19:28:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:18.869 19:28:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.869 19:28:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:18.869 19:28:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:18.869 19:28:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:18.869 19:28:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.869 19:28:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.869 19:28:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.869 19:28:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:18.869 19:28:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:18.869 19:28:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:18.869 19:28:16 -- common/autotest_common.sh@10 -- # set +x 00:19:20.773 19:28:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:20.773 19:28:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:20.773 19:28:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:20.773 19:28:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:20.773 19:28:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:20.773 19:28:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:20.773 19:28:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:20.773 19:28:18 -- nvmf/common.sh@294 -- # net_devs=() 00:19:20.773 19:28:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:20.773 19:28:18 -- nvmf/common.sh@295 
-- # e810=() 00:19:20.773 19:28:18 -- nvmf/common.sh@295 -- # local -ga e810 00:19:20.773 19:28:18 -- nvmf/common.sh@296 -- # x722=() 00:19:20.773 19:28:18 -- nvmf/common.sh@296 -- # local -ga x722 00:19:20.773 19:28:18 -- nvmf/common.sh@297 -- # mlx=() 00:19:20.773 19:28:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:20.773 19:28:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.773 19:28:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:20.773 19:28:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:20.773 19:28:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:20.773 19:28:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:20.773 19:28:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:20.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:20.773 19:28:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:20.773 19:28:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:20.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:20.773 19:28:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:20.773 19:28:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:20.773 19:28:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.773 19:28:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:20.773 19:28:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.773 19:28:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:20.773 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:20.773 19:28:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.773 19:28:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:20.773 19:28:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.773 19:28:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:20.773 19:28:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.773 19:28:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:20.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:20.773 19:28:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.773 19:28:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:20.773 19:28:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:20.773 19:28:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:20.773 19:28:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.773 19:28:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.773 19:28:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.773 19:28:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:20.773 19:28:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.773 19:28:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.773 19:28:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:20.773 19:28:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.773 19:28:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.773 19:28:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:20.773 19:28:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:20.773 19:28:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.773 19:28:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.773 19:28:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.773 19:28:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.773 19:28:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:20.773 19:28:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.773 19:28:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.773 19:28:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.773 19:28:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:20.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:20.773 00:19:20.773 --- 10.0.0.2 ping statistics --- 00:19:20.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.773 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:20.773 19:28:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:19:20.773 00:19:20.773 --- 10.0.0.1 ping statistics --- 00:19:20.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.773 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:20.773 19:28:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.773 19:28:18 -- nvmf/common.sh@410 -- # return 0 00:19:20.773 19:28:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:20.773 19:28:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.773 19:28:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:20.773 19:28:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.773 19:28:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:20.773 19:28:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:20.773 19:28:18 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:20.773 19:28:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:20.773 19:28:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.773 19:28:18 -- common/autotest_common.sh@10 -- # set +x 00:19:20.773 19:28:18 -- nvmf/common.sh@469 -- # nvmfpid=1219622 00:19:20.773 19:28:18 -- nvmf/common.sh@470 -- # waitforlisten 1219622 00:19:20.773 19:28:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:20.773 19:28:18 -- common/autotest_common.sh@829 -- # '[' -z 1219622 ']' 00:19:20.773 19:28:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.773 19:28:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.773 19:28:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.773 19:28:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.773 19:28:18 -- common/autotest_common.sh@10 -- # set +x 00:19:20.773 [2024-11-17 19:28:18.821274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:20.773 [2024-11-17 19:28:18.821356] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.773 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.773 [2024-11-17 19:28:18.890374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.773 [2024-11-17 19:28:18.983501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:20.774 [2024-11-17 19:28:18.983664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.774 [2024-11-17 19:28:18.983693] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.774 [2024-11-17 19:28:18.983709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.774 [2024-11-17 19:28:18.983798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.774 [2024-11-17 19:28:18.983995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.774 [2024-11-17 19:28:18.984016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.774 [2024-11-17 19:28:18.984019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.712 19:28:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.712 19:28:19 -- common/autotest_common.sh@862 -- # return 0 00:19:21.712 19:28:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:21.712 19:28:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:21.712 19:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.712 19:28:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.712 19:28:19 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.712 19:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.712 19:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.712 [2024-11-17 19:28:19.794174] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.712 19:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.712 19:28:19 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.712 19:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.712 19:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.712 Malloc0 00:19:21.712 19:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.712 19:28:19 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.712 19:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.712 19:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.712 19:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.712 19:28:19 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.712 19:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.712 19:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.712 19:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.712 19:28:19 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.712 19:28:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.712 19:28:19 -- common/autotest_common.sh@10 -- # set +x 00:19:21.712 [2024-11-17 19:28:19.845319] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.712 19:28:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.712 19:28:19 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:21.712 19:28:19 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.712 19:28:19 -- nvmf/common.sh@520 -- # config=() 00:19:21.712 19:28:19 -- nvmf/common.sh@520 -- # local subsystem config 00:19:21.712 19:28:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:21.712 19:28:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:21.712 { 00:19:21.712 "params": { 00:19:21.712 "name": "Nvme$subsystem", 00:19:21.712 "trtype": "$TEST_TRANSPORT", 00:19:21.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.712 "adrfam": "ipv4", 00:19:21.712 "trsvcid": 
"$NVMF_PORT", 00:19:21.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.712 "hdgst": ${hdgst:-false}, 00:19:21.712 "ddgst": ${ddgst:-false} 00:19:21.712 }, 00:19:21.712 "method": "bdev_nvme_attach_controller" 00:19:21.712 } 00:19:21.712 EOF 00:19:21.712 )") 00:19:21.712 19:28:19 -- nvmf/common.sh@542 -- # cat 00:19:21.712 19:28:19 -- nvmf/common.sh@544 -- # jq . 00:19:21.712 19:28:19 -- nvmf/common.sh@545 -- # IFS=, 00:19:21.712 19:28:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:21.712 "params": { 00:19:21.712 "name": "Nvme1", 00:19:21.712 "trtype": "tcp", 00:19:21.712 "traddr": "10.0.0.2", 00:19:21.712 "adrfam": "ipv4", 00:19:21.712 "trsvcid": "4420", 00:19:21.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.712 "hdgst": false, 00:19:21.712 "ddgst": false 00:19:21.712 }, 00:19:21.712 "method": "bdev_nvme_attach_controller" 00:19:21.712 }' 00:19:21.712 [2024-11-17 19:28:19.886143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:21.712 [2024-11-17 19:28:19.886232] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219783 ] 00:19:21.712 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.712 [2024-11-17 19:28:19.947008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.970 [2024-11-17 19:28:20.041486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.970 [2024-11-17 19:28:20.041534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.970 [2024-11-17 19:28:20.041537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.228 [2024-11-17 19:28:20.377432] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:22.228 [2024-11-17 19:28:20.377481] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:22.228 I/O targets: 00:19:22.228 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.228 00:19:22.228 00:19:22.228 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.228 http://cunit.sourceforge.net/ 00:19:22.228 00:19:22.228 00:19:22.228 Suite: bdevio tests on: Nvme1n1 00:19:22.228 Test: blockdev write read block ...passed 00:19:22.228 Test: blockdev write zeroes read block ...passed 00:19:22.228 Test: blockdev write zeroes read no split ...passed 00:19:22.486 Test: blockdev write zeroes read split ...passed 00:19:22.486 Test: blockdev write zeroes read split partial ...passed 00:19:22.486 Test: blockdev reset ...[2024-11-17 19:28:20.530010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.486 [2024-11-17 19:28:20.530113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa0e40 (9): Bad file descriptor 00:19:22.486 [2024-11-17 19:28:20.627617] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.486 passed 00:19:22.486 Test: blockdev write read 8 blocks ...passed 00:19:22.486 Test: blockdev write read size > 128k ...passed 00:19:22.486 Test: blockdev write read invalid size ...passed 00:19:22.486 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.486 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.486 Test: blockdev write read max offset ...passed 00:19:22.744 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.744 Test: blockdev writev readv 8 blocks ...passed 00:19:22.744 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.744 Test: blockdev writev readv block ...passed 00:19:22.744 Test: blockdev writev readv size > 128k ...passed 00:19:22.744 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.744 Test: blockdev comparev and writev ...[2024-11-17 19:28:20.880618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.744 [2024-11-17 19:28:20.880654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.744 [2024-11-17 19:28:20.880686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.744 [2024-11-17 19:28:20.880706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.744 [2024-11-17 19:28:20.881017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.744 [2024-11-17 19:28:20.881043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.744 [2024-11-17 19:28:20.881065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.745 [2024-11-17 19:28:20.881081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.881368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.745 [2024-11-17 19:28:20.881392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.881413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.745 [2024-11-17 19:28:20.881429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.881741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.745 [2024-11-17 19:28:20.881766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.881786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.745 [2024-11-17 19:28:20.881802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.745 passed 00:19:22.745 Test: blockdev nvme passthru rw ...passed 00:19:22.745 Test: blockdev nvme passthru vendor specific ...[2024-11-17 19:28:20.963916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.745 [2024-11-17 19:28:20.963943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.964078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.745 [2024-11-17 19:28:20.964099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.964234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.745 [2024-11-17 19:28:20.964255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.745 [2024-11-17 19:28:20.964389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.745 [2024-11-17 19:28:20.964410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.745 passed 00:19:22.745 Test: blockdev nvme admin passthru ...passed 00:19:23.003 Test: blockdev copy ...passed 00:19:23.003 00:19:23.003 Run Summary: Type Total Ran Passed Failed Inactive 00:19:23.003 suites 1 1 n/a 0 0 00:19:23.003 tests 23 23 23 0 0 00:19:23.003 asserts 152 152 152 0 n/a 00:19:23.003 00:19:23.003 Elapsed time = 1.291 seconds 00:19:23.003 19:28:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.003 19:28:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.003 19:28:21 -- common/autotest_common.sh@10 -- # set +x 00:19:23.003 19:28:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.003 19:28:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.003 19:28:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.003 19:28:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:23.003 19:28:21 -- nvmf/common.sh@116 -- # sync 00:19:23.003 19:28:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:23.003 19:28:21 -- nvmf/common.sh@119 -- # set +e 00:19:23.003 19:28:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:23.003 19:28:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:23.003 rmmod nvme_tcp 00:19:23.003 rmmod nvme_fabrics 00:19:23.003 rmmod nvme_keyring 00:19:23.003 19:28:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:23.261 19:28:21 -- nvmf/common.sh@123 -- # set -e 00:19:23.261 19:28:21 -- nvmf/common.sh@124 -- # return 0 00:19:23.262 19:28:21 -- nvmf/common.sh@477 -- # '[' -n 1219622 ']' 00:19:23.262 19:28:21 -- nvmf/common.sh@478 -- # killprocess 1219622 00:19:23.262 19:28:21 -- common/autotest_common.sh@936 -- # '[' -z 1219622 ']' 00:19:23.262 19:28:21 -- common/autotest_common.sh@940 -- # kill -0 1219622 00:19:23.262 19:28:21 -- common/autotest_common.sh@941 -- # uname 00:19:23.262 19:28:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.262 19:28:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1219622 00:19:23.262 19:28:21 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:23.262 19:28:21 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:23.262 19:28:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1219622' 00:19:23.262 killing process with pid 1219622 00:19:23.262 19:28:21 -- common/autotest_common.sh@955 -- # kill 1219622 00:19:23.262 19:28:21 -- common/autotest_common.sh@960 -- # wait 1219622 00:19:23.521 19:28:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:23.521 19:28:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:23.521 19:28:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:23.521 19:28:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.521 19:28:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:23.521 19:28:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.521 19:28:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.521 19:28:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.421 19:28:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:25.421 00:19:25.421 real 0m7.087s 00:19:25.421 user 0m13.732s 00:19:25.421 sys 0m2.103s 00:19:25.421 19:28:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:25.421 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:19:25.421 ************************************ 00:19:25.421 END TEST nvmf_bdevio 00:19:25.421 ************************************ 00:19:25.421 19:28:23 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:25.421 19:28:23 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:25.421 19:28:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:25.421 19:28:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.421 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:19:25.421 ************************************ 00:19:25.421 START TEST nvmf_bdevio_no_huge 00:19:25.421 ************************************ 00:19:25.421 19:28:23 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:25.421 * Looking for test storage... 
00:19:25.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.421 19:28:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:25.421 19:28:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:25.421 19:28:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:25.680 19:28:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:25.680 19:28:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:25.680 19:28:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:25.680 19:28:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:25.680 19:28:23 -- scripts/common.sh@335 -- # IFS=.-: 00:19:25.680 19:28:23 -- scripts/common.sh@335 -- # read -ra ver1 00:19:25.680 19:28:23 -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.680 19:28:23 -- scripts/common.sh@336 -- # read -ra ver2 00:19:25.680 19:28:23 -- scripts/common.sh@337 -- # local 'op=<' 00:19:25.680 19:28:23 -- scripts/common.sh@339 -- # ver1_l=2 00:19:25.680 19:28:23 -- scripts/common.sh@340 -- # ver2_l=1 00:19:25.680 19:28:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:25.680 19:28:23 -- scripts/common.sh@343 -- # case "$op" in 00:19:25.680 19:28:23 -- scripts/common.sh@344 -- # : 1 00:19:25.680 19:28:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:25.680 19:28:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:25.680 19:28:23 -- scripts/common.sh@364 -- # decimal 1 00:19:25.680 19:28:23 -- scripts/common.sh@352 -- # local d=1 00:19:25.680 19:28:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.680 19:28:23 -- scripts/common.sh@354 -- # echo 1 00:19:25.680 19:28:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:25.680 19:28:23 -- scripts/common.sh@365 -- # decimal 2 00:19:25.680 19:28:23 -- scripts/common.sh@352 -- # local d=2 00:19:25.680 19:28:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.680 19:28:23 -- scripts/common.sh@354 -- # echo 2 00:19:25.680 19:28:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:25.680 19:28:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:25.680 19:28:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:25.680 19:28:23 -- scripts/common.sh@367 -- # return 0 00:19:25.680 19:28:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.680 19:28:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.680 --rc genhtml_branch_coverage=1 00:19:25.680 --rc genhtml_function_coverage=1 00:19:25.680 --rc genhtml_legend=1 00:19:25.680 --rc geninfo_all_blocks=1 00:19:25.680 --rc geninfo_unexecuted_blocks=1 00:19:25.680 00:19:25.680 ' 00:19:25.680 19:28:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.680 --rc genhtml_branch_coverage=1 00:19:25.680 --rc genhtml_function_coverage=1 00:19:25.680 --rc genhtml_legend=1 00:19:25.680 --rc geninfo_all_blocks=1 00:19:25.680 --rc geninfo_unexecuted_blocks=1 00:19:25.681 00:19:25.681 ' 00:19:25.681 19:28:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.681 --rc genhtml_branch_coverage=1 00:19:25.681 --rc genhtml_function_coverage=1 00:19:25.681 --rc genhtml_legend=1 00:19:25.681 --rc geninfo_all_blocks=1 00:19:25.681 --rc geninfo_unexecuted_blocks=1 00:19:25.681 00:19:25.681 
' 00:19:25.681 19:28:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.681 --rc genhtml_branch_coverage=1 00:19:25.681 --rc genhtml_function_coverage=1 00:19:25.681 --rc genhtml_legend=1 00:19:25.681 --rc geninfo_all_blocks=1 00:19:25.681 --rc geninfo_unexecuted_blocks=1 00:19:25.681 00:19:25.681 ' 00:19:25.681 19:28:23 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.681 19:28:23 -- nvmf/common.sh@7 -- # uname -s 00:19:25.681 19:28:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.681 19:28:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.681 19:28:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.681 19:28:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.681 19:28:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.681 19:28:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.681 19:28:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.681 19:28:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.681 19:28:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.681 19:28:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.681 19:28:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.681 19:28:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.681 19:28:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.681 19:28:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.681 19:28:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.681 19:28:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.681 19:28:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.681 19:28:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.681 19:28:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.681 19:28:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.681 19:28:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.681 19:28:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.681 19:28:23 -- paths/export.sh@5 -- # export PATH 00:19:25.681 19:28:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.681 19:28:23 -- nvmf/common.sh@46 -- # : 0 00:19:25.681 19:28:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:25.681 19:28:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:25.681 19:28:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:25.681 19:28:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.681 19:28:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.681 19:28:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:25.681 19:28:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:25.681 19:28:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:25.681 19:28:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.681 19:28:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.681 19:28:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:25.681 19:28:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:25.681 19:28:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.681 19:28:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:25.681 19:28:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:25.681 19:28:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:25.681 19:28:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.681 19:28:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.681 19:28:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.681 19:28:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:25.681 19:28:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:25.681 19:28:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:25.681 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:19:28.211 19:28:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.211 19:28:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:28.211 19:28:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:28.211 19:28:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:28.211 19:28:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:28.211 19:28:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:28.211 19:28:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:28.211 19:28:25 -- nvmf/common.sh@294 -- # net_devs=() 00:19:28.211 19:28:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:28.211 19:28:25 -- nvmf/common.sh@295 
-- # e810=() 00:19:28.211 19:28:25 -- nvmf/common.sh@295 -- # local -ga e810 00:19:28.211 19:28:25 -- nvmf/common.sh@296 -- # x722=() 00:19:28.211 19:28:25 -- nvmf/common.sh@296 -- # local -ga x722 00:19:28.211 19:28:25 -- nvmf/common.sh@297 -- # mlx=() 00:19:28.211 19:28:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:28.211 19:28:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.211 19:28:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:28.211 19:28:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:28.211 19:28:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:28.211 19:28:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:28.212 19:28:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:28.212 19:28:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.212 19:28:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:28.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:28.212 19:28:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.212 19:28:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:28.212 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:28.212 19:28:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:28.212 19:28:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.212 19:28:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.212 19:28:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.212 19:28:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.212 19:28:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:28.212 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:28.212 19:28:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.212 19:28:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.212 19:28:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.212 19:28:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.212 19:28:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.212 19:28:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:28.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:28.212 19:28:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.212 19:28:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:28.212 19:28:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:28.212 19:28:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:28.212 19:28:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:28.212 19:28:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.212 19:28:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.212 19:28:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.212 19:28:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:28.212 19:28:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.212 19:28:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.212 19:28:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:28.212 19:28:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.212 19:28:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.212 19:28:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:28.212 19:28:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:28.212 19:28:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.212 19:28:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.212 19:28:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.212 19:28:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.212 19:28:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:28.212 19:28:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.212 19:28:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.212 19:28:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.212 19:28:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:28.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:19:28.212 00:19:28.212 --- 10.0.0.2 ping statistics --- 00:19:28.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.212 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:28.212 19:28:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:19:28.212 00:19:28.212 --- 10.0.0.1 ping statistics --- 00:19:28.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.212 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:19:28.212 19:28:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.212 19:28:26 -- nvmf/common.sh@410 -- # return 0 00:19:28.212 19:28:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:28.212 19:28:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.212 19:28:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:28.212 19:28:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:28.212 19:28:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.212 19:28:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:28.212 19:28:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:28.212 19:28:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:28.212 19:28:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.212 19:28:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.212 19:28:26 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 19:28:26 -- nvmf/common.sh@469 -- # nvmfpid=1221874 00:19:28.212 19:28:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:28.212 19:28:26 -- nvmf/common.sh@470 -- # waitforlisten 1221874 00:19:28.212 19:28:26 -- common/autotest_common.sh@829 -- # '[' -z 1221874 ']' 00:19:28.212 19:28:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.212 19:28:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.212 19:28:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.212 19:28:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.212 19:28:26 -- common/autotest_common.sh@10 -- # set +x 00:19:28.212 [2024-11-17 19:28:26.113751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:28.212 [2024-11-17 19:28:26.113822] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:28.212 [2024-11-17 19:28:26.183670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.212 [2024-11-17 19:28:26.266335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:28.212 [2024-11-17 19:28:26.266486] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.212 [2024-11-17 19:28:26.266505] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.212 [2024-11-17 19:28:26.266519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
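For context on what the log shows next: the no-huge variant of the bdevio test starts the NVMe-oF target inside the cvl_0_0_ns_spdk network namespace without hugepages and then configures it over RPC. Below is a minimal sketch of the equivalent commands, assembled from the invocations visible in this log; the repo-relative paths (./build/bin/nvmf_tgt, scripts/rpc.py) and the comments describing the flags are assumptions based on standard SPDK usage, not part of the captured output.

    # start the target in the test namespace; --no-huge allocates from anonymous
    # memory instead of hugepages, -s 1024 caps it at 1024 MB, and -m 0x78 runs
    # the reactors on cores 3-6 (matching the "Reactor started on core" lines below)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 --no-huge -s 1024

    # RPC sequence issued by target/bdevio.sh once the target is listening on /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then launched with the same --no-huge -s 1024 options and a JSON config, generated by gen_nvmf_target_json and passed on /dev/fd/62, describing the Nvme1 controller to attach over TCP at 10.0.0.2:4420.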
00:19:28.212 [2024-11-17 19:28:26.266581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:28.212 [2024-11-17 19:28:26.266621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:28.213 [2024-11-17 19:28:26.266644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:28.213 [2024-11-17 19:28:26.266647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.146 19:28:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.146 19:28:27 -- common/autotest_common.sh@862 -- # return 0 00:19:29.146 19:28:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:29.146 19:28:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.146 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.146 19:28:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.146 19:28:27 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.146 19:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.146 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.146 [2024-11-17 19:28:27.166488] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.146 19:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.146 19:28:27 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.146 19:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.146 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.146 Malloc0 00:19:29.146 19:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.146 19:28:27 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.146 19:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.146 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.146 19:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.146 19:28:27 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.147 19:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.147 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.147 19:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.147 19:28:27 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.147 19:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.147 19:28:27 -- common/autotest_common.sh@10 -- # set +x 00:19:29.147 [2024-11-17 19:28:27.204868] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.147 19:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.147 19:28:27 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:29.147 19:28:27 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:29.147 19:28:27 -- nvmf/common.sh@520 -- # config=() 00:19:29.147 19:28:27 -- nvmf/common.sh@520 -- # local subsystem config 00:19:29.147 19:28:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.147 19:28:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.147 { 00:19:29.147 "params": { 00:19:29.147 "name": "Nvme$subsystem", 00:19:29.147 "trtype": "$TEST_TRANSPORT", 00:19:29.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.147 "adrfam": "ipv4", 00:19:29.147 
"trsvcid": "$NVMF_PORT", 00:19:29.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.147 "hdgst": ${hdgst:-false}, 00:19:29.147 "ddgst": ${ddgst:-false} 00:19:29.147 }, 00:19:29.147 "method": "bdev_nvme_attach_controller" 00:19:29.147 } 00:19:29.147 EOF 00:19:29.147 )") 00:19:29.147 19:28:27 -- nvmf/common.sh@542 -- # cat 00:19:29.147 19:28:27 -- nvmf/common.sh@544 -- # jq . 00:19:29.147 19:28:27 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.147 19:28:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.147 "params": { 00:19:29.147 "name": "Nvme1", 00:19:29.147 "trtype": "tcp", 00:19:29.147 "traddr": "10.0.0.2", 00:19:29.147 "adrfam": "ipv4", 00:19:29.147 "trsvcid": "4420", 00:19:29.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.147 "hdgst": false, 00:19:29.147 "ddgst": false 00:19:29.147 }, 00:19:29.147 "method": "bdev_nvme_attach_controller" 00:19:29.147 }' 00:19:29.147 [2024-11-17 19:28:27.249671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:29.147 [2024-11-17 19:28:27.249765] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1222032 ] 00:19:29.147 [2024-11-17 19:28:27.309433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:29.147 [2024-11-17 19:28:27.396551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.147 [2024-11-17 19:28:27.396600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.147 [2024-11-17 19:28:27.396603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.405 [2024-11-17 19:28:27.589278] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:29.405 [2024-11-17 19:28:27.589324] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:29.405 I/O targets: 00:19:29.405 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:29.405 00:19:29.405 00:19:29.405 CUnit - A unit testing framework for C - Version 2.1-3 00:19:29.405 http://cunit.sourceforge.net/ 00:19:29.405 00:19:29.405 00:19:29.405 Suite: bdevio tests on: Nvme1n1 00:19:29.405 Test: blockdev write read block ...passed 00:19:29.663 Test: blockdev write zeroes read block ...passed 00:19:29.663 Test: blockdev write zeroes read no split ...passed 00:19:29.663 Test: blockdev write zeroes read split ...passed 00:19:29.663 Test: blockdev write zeroes read split partial ...passed 00:19:29.663 Test: blockdev reset ...[2024-11-17 19:28:27.707928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.663 [2024-11-17 19:28:27.708027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb07c0 (9): Bad file descriptor 00:19:29.663 [2024-11-17 19:28:27.818913] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:29.663 passed 00:19:29.663 Test: blockdev write read 8 blocks ...passed 00:19:29.663 Test: blockdev write read size > 128k ...passed 00:19:29.663 Test: blockdev write read invalid size ...passed 00:19:29.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:29.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:29.663 Test: blockdev write read max offset ...passed 00:19:29.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:29.921 Test: blockdev writev readv 8 blocks ...passed 00:19:29.921 Test: blockdev writev readv 30 x 1block ...passed 00:19:29.921 Test: blockdev writev readv block ...passed 00:19:29.921 Test: blockdev writev readv size > 128k ...passed 00:19:29.921 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:29.921 Test: blockdev comparev and writev ...[2024-11-17 19:28:28.069864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.069898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.069923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.069939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.070273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.070297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.070319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.070334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.070653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.070685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.070710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.070726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.071063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.071086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.071107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:29.921 [2024-11-17 19:28:28.071123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.921 passed 00:19:29.921 Test: blockdev nvme passthru rw ...passed 00:19:29.921 Test: blockdev nvme passthru vendor specific ...[2024-11-17 19:28:28.152919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:29.921 [2024-11-17 19:28:28.152947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.153081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:29.921 [2024-11-17 19:28:28.153104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.153241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:29.921 [2024-11-17 19:28:28.153269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.921 [2024-11-17 19:28:28.153401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:29.921 [2024-11-17 19:28:28.153423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:29.921 passed 00:19:29.921 Test: blockdev nvme admin passthru ...passed 00:19:30.179 Test: blockdev copy ...passed 00:19:30.179 00:19:30.179 Run Summary: Type Total Ran Passed Failed Inactive 00:19:30.179 suites 1 1 n/a 0 0 00:19:30.179 tests 23 23 23 0 0 00:19:30.179 asserts 152 152 152 0 n/a 00:19:30.179 00:19:30.179 Elapsed time = 1.241 seconds 00:19:30.437 19:28:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.437 19:28:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.437 19:28:28 -- common/autotest_common.sh@10 -- # set +x 00:19:30.437 19:28:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.437 19:28:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:30.437 19:28:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:30.437 19:28:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:30.437 19:28:28 -- nvmf/common.sh@116 -- # sync 00:19:30.437 19:28:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:30.437 19:28:28 -- nvmf/common.sh@119 -- # set +e 00:19:30.437 19:28:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:30.437 19:28:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:30.437 rmmod nvme_tcp 00:19:30.437 rmmod nvme_fabrics 00:19:30.437 rmmod nvme_keyring 00:19:30.437 19:28:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:30.437 19:28:28 -- nvmf/common.sh@123 -- # set -e 00:19:30.437 19:28:28 -- nvmf/common.sh@124 -- # return 0 00:19:30.437 19:28:28 -- nvmf/common.sh@477 -- # '[' -n 1221874 ']' 00:19:30.437 19:28:28 -- nvmf/common.sh@478 -- # killprocess 1221874 00:19:30.437 19:28:28 -- common/autotest_common.sh@936 -- # '[' -z 1221874 ']' 00:19:30.437 19:28:28 -- common/autotest_common.sh@940 -- # kill -0 1221874 00:19:30.437 19:28:28 -- common/autotest_common.sh@941 -- # uname 00:19:30.437 19:28:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.437 19:28:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1221874 00:19:30.437 19:28:28 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:30.437 19:28:28 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:30.437 19:28:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1221874' 00:19:30.437 killing process with pid 1221874 00:19:30.437 19:28:28 -- common/autotest_common.sh@955 -- # kill 1221874 00:19:30.437 19:28:28 -- common/autotest_common.sh@960 -- # wait 1221874 00:19:31.006 19:28:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.006 19:28:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.006 19:28:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.006 19:28:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.006 19:28:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.006 19:28:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.006 19:28:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.006 19:28:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.906 19:28:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:32.906 00:19:32.906 real 0m7.438s 00:19:32.906 user 0m13.950s 00:19:32.906 sys 0m2.633s 00:19:32.906 19:28:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:32.906 19:28:31 -- common/autotest_common.sh@10 -- # set +x 00:19:32.906 ************************************ 00:19:32.906 END TEST nvmf_bdevio_no_huge 00:19:32.906 ************************************ 00:19:32.906 19:28:31 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:32.906 19:28:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:32.906 19:28:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:32.906 19:28:31 -- common/autotest_common.sh@10 -- # set +x 00:19:32.906 ************************************ 00:19:32.906 START TEST nvmf_tls 00:19:32.906 ************************************ 00:19:32.906 19:28:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:32.906 * Looking for test storage... 
00:19:32.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.906 19:28:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:32.906 19:28:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:32.906 19:28:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:33.164 19:28:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:33.164 19:28:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:33.164 19:28:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:33.164 19:28:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:33.164 19:28:31 -- scripts/common.sh@335 -- # IFS=.-: 00:19:33.164 19:28:31 -- scripts/common.sh@335 -- # read -ra ver1 00:19:33.164 19:28:31 -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.164 19:28:31 -- scripts/common.sh@336 -- # read -ra ver2 00:19:33.164 19:28:31 -- scripts/common.sh@337 -- # local 'op=<' 00:19:33.164 19:28:31 -- scripts/common.sh@339 -- # ver1_l=2 00:19:33.164 19:28:31 -- scripts/common.sh@340 -- # ver2_l=1 00:19:33.164 19:28:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:33.164 19:28:31 -- scripts/common.sh@343 -- # case "$op" in 00:19:33.164 19:28:31 -- scripts/common.sh@344 -- # : 1 00:19:33.164 19:28:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:33.164 19:28:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.164 19:28:31 -- scripts/common.sh@364 -- # decimal 1 00:19:33.164 19:28:31 -- scripts/common.sh@352 -- # local d=1 00:19:33.164 19:28:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.164 19:28:31 -- scripts/common.sh@354 -- # echo 1 00:19:33.164 19:28:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:33.164 19:28:31 -- scripts/common.sh@365 -- # decimal 2 00:19:33.164 19:28:31 -- scripts/common.sh@352 -- # local d=2 00:19:33.164 19:28:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.164 19:28:31 -- scripts/common.sh@354 -- # echo 2 00:19:33.164 19:28:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:33.165 19:28:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:33.165 19:28:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:33.165 19:28:31 -- scripts/common.sh@367 -- # return 0 00:19:33.165 19:28:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.165 19:28:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:33.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.165 --rc genhtml_branch_coverage=1 00:19:33.165 --rc genhtml_function_coverage=1 00:19:33.165 --rc genhtml_legend=1 00:19:33.165 --rc geninfo_all_blocks=1 00:19:33.165 --rc geninfo_unexecuted_blocks=1 00:19:33.165 00:19:33.165 ' 00:19:33.165 19:28:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:33.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.165 --rc genhtml_branch_coverage=1 00:19:33.165 --rc genhtml_function_coverage=1 00:19:33.165 --rc genhtml_legend=1 00:19:33.165 --rc geninfo_all_blocks=1 00:19:33.165 --rc geninfo_unexecuted_blocks=1 00:19:33.165 00:19:33.165 ' 00:19:33.165 19:28:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:33.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.165 --rc genhtml_branch_coverage=1 00:19:33.165 --rc genhtml_function_coverage=1 00:19:33.165 --rc genhtml_legend=1 00:19:33.165 --rc geninfo_all_blocks=1 00:19:33.165 --rc geninfo_unexecuted_blocks=1 00:19:33.165 00:19:33.165 
' 00:19:33.165 19:28:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:33.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.165 --rc genhtml_branch_coverage=1 00:19:33.165 --rc genhtml_function_coverage=1 00:19:33.165 --rc genhtml_legend=1 00:19:33.165 --rc geninfo_all_blocks=1 00:19:33.165 --rc geninfo_unexecuted_blocks=1 00:19:33.165 00:19:33.165 ' 00:19:33.165 19:28:31 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.165 19:28:31 -- nvmf/common.sh@7 -- # uname -s 00:19:33.165 19:28:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.165 19:28:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.165 19:28:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.165 19:28:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.165 19:28:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.165 19:28:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.165 19:28:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.165 19:28:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.165 19:28:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.165 19:28:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.165 19:28:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.165 19:28:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.165 19:28:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.165 19:28:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.165 19:28:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.165 19:28:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.165 19:28:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.165 19:28:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.165 19:28:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.165 19:28:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.165 19:28:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.165 19:28:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.165 19:28:31 -- paths/export.sh@5 -- # export PATH 00:19:33.165 19:28:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.165 19:28:31 -- nvmf/common.sh@46 -- # : 0 00:19:33.165 19:28:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.165 19:28:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.165 19:28:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.165 19:28:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.165 19:28:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.165 19:28:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:33.165 19:28:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.165 19:28:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.165 19:28:31 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:33.165 19:28:31 -- target/tls.sh@71 -- # nvmftestinit 00:19:33.165 19:28:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:33.165 19:28:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.165 19:28:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.165 19:28:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.165 19:28:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.165 19:28:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.165 19:28:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.165 19:28:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.165 19:28:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:33.165 19:28:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:33.165 19:28:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:33.165 19:28:31 -- common/autotest_common.sh@10 -- # set +x 00:19:35.064 19:28:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:35.064 19:28:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:35.064 19:28:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:35.064 19:28:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:35.064 19:28:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:35.064 19:28:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:35.064 19:28:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:35.064 19:28:33 -- nvmf/common.sh@294 -- # net_devs=() 00:19:35.064 19:28:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:35.064 19:28:33 -- nvmf/common.sh@295 -- # e810=() 00:19:35.064 
19:28:33 -- nvmf/common.sh@295 -- # local -ga e810 00:19:35.064 19:28:33 -- nvmf/common.sh@296 -- # x722=() 00:19:35.064 19:28:33 -- nvmf/common.sh@296 -- # local -ga x722 00:19:35.064 19:28:33 -- nvmf/common.sh@297 -- # mlx=() 00:19:35.064 19:28:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:35.064 19:28:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.064 19:28:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.065 19:28:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:35.065 19:28:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:35.065 19:28:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:35.065 19:28:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.065 19:28:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.065 19:28:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.065 19:28:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.065 19:28:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:35.065 19:28:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.065 19:28:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.065 19:28:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.065 19:28:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.065 19:28:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.065 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:35.065 19:28:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.065 19:28:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.065 19:28:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.065 19:28:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.065 19:28:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.065 19:28:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.065 19:28:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.065 19:28:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:35.065 19:28:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:35.065 19:28:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:35.065 19:28:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:35.065 19:28:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.065 19:28:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.065 19:28:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.065 19:28:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:35.065 19:28:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.065 19:28:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.065 19:28:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:35.065 19:28:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.065 19:28:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.065 19:28:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:35.065 19:28:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:35.065 19:28:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.065 19:28:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.065 19:28:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.065 19:28:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.065 19:28:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:35.065 19:28:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.065 19:28:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.324 19:28:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.324 19:28:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:35.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:19:35.324 00:19:35.324 --- 10.0.0.2 ping statistics --- 00:19:35.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.324 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:19:35.324 19:28:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:35.324 00:19:35.324 --- 10.0.0.1 ping statistics --- 00:19:35.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.324 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:35.324 19:28:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.324 19:28:33 -- nvmf/common.sh@410 -- # return 0 00:19:35.324 19:28:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:35.324 19:28:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.324 19:28:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:35.324 19:28:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:35.324 19:28:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.324 19:28:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:35.324 19:28:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:35.324 19:28:33 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:35.324 19:28:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:35.324 19:28:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.324 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.324 19:28:33 -- nvmf/common.sh@469 -- # nvmfpid=1224246 00:19:35.324 19:28:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:35.324 19:28:33 -- nvmf/common.sh@470 -- # waitforlisten 1224246 00:19:35.324 19:28:33 -- common/autotest_common.sh@829 -- # '[' -z 1224246 ']' 00:19:35.324 19:28:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.324 19:28:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.324 19:28:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.324 19:28:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.324 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.324 [2024-11-17 19:28:33.416908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:35.324 [2024-11-17 19:28:33.416999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.324 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.324 [2024-11-17 19:28:33.487022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.324 [2024-11-17 19:28:33.575846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.324 [2024-11-17 19:28:33.576005] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.325 [2024-11-17 19:28:33.576022] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.325 [2024-11-17 19:28:33.576046] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
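For readers following the nvmf_tcp_init trace above: the test network is built by moving the target-side port of the NIC pair into a private network namespace and addressing the two ends as initiator and target. A minimal sketch of the equivalent setup, only restating the commands already recorded in the trace (it assumes the cvl_0_0/cvl_0_1 net devices reported for 0000:0a:00.0/1 exist), is:

    # netns-based test topology as traced above
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP on the host side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> host

Subsequent target-side commands in the trace are then simply prefixed with "ip netns exec cvl_0_0_ns_spdk", which is what NVMF_TARGET_NS_CMD captures.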
00:19:35.325 [2024-11-17 19:28:33.576079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.583 19:28:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.583 19:28:33 -- common/autotest_common.sh@862 -- # return 0 00:19:35.583 19:28:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:35.583 19:28:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.583 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.583 19:28:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.583 19:28:33 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:35.583 19:28:33 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:35.842 true 00:19:35.842 19:28:33 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:35.842 19:28:33 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:36.100 19:28:34 -- target/tls.sh@82 -- # version=0 00:19:36.100 19:28:34 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:36.100 19:28:34 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:36.359 19:28:34 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:36.359 19:28:34 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:36.617 19:28:34 -- target/tls.sh@90 -- # version=13 00:19:36.617 19:28:34 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:36.617 19:28:34 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:36.875 19:28:34 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:36.875 19:28:34 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:37.134 19:28:35 -- target/tls.sh@98 -- # version=7 00:19:37.134 19:28:35 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:37.134 19:28:35 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:37.134 19:28:35 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:37.392 19:28:35 -- target/tls.sh@105 -- # ktls=false 00:19:37.392 19:28:35 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:37.392 19:28:35 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:37.651 19:28:35 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:37.651 19:28:35 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:37.651 19:28:35 -- target/tls.sh@113 -- # ktls=true 00:19:37.651 19:28:35 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:37.651 19:28:35 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:37.909 19:28:36 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:37.909 19:28:36 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:38.168 19:28:36 -- target/tls.sh@121 -- # ktls=false 00:19:38.168 19:28:36 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:38.168 19:28:36 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:38.168 19:28:36 -- target/tls.sh@49 -- # local key hash crc 00:19:38.168 19:28:36 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:38.168 19:28:36 -- target/tls.sh@51 -- # hash=01 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # gzip -1 -c 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # tail -c8 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # head -c 4 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # crc='p$H�' 00:19:38.168 19:28:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:38.168 19:28:36 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:38.168 19:28:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:38.168 19:28:36 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:38.168 19:28:36 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:38.168 19:28:36 -- target/tls.sh@49 -- # local key hash crc 00:19:38.168 19:28:36 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:38.168 19:28:36 -- target/tls.sh@51 -- # hash=01 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # gzip -1 -c 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # tail -c8 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # head -c 4 00:19:38.168 19:28:36 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:38.168 19:28:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:38.168 19:28:36 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:38.426 19:28:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:38.426 19:28:36 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:38.426 19:28:36 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:38.426 19:28:36 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:38.426 19:28:36 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:38.426 19:28:36 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:38.426 19:28:36 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:38.426 19:28:36 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:38.426 19:28:36 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:38.685 19:28:36 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:38.944 19:28:37 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:38.944 19:28:37 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:38.944 19:28:37 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:39.202 [2024-11-17 19:28:37.318344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
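The format_interchange_psk steps traced above can be read as one small helper. This is only a condensed restatement of the traced pipeline, not the script's actual function definition; it relies on the fact that gzip's 8-byte trailer carries the CRC32 of the uncompressed input (little endian) followed by the input size:

    # Sketch of the traced PSK interchange derivation:
    #   NVMeTLSkey-1:<hash>:<base64(configured_key || CRC32(configured_key))>:
    format_psk_sketch() {
        local key=$1 hash=$2 crc
        # tail -c8 | head -c4 pulls the 4 CRC bytes out of the gzip trailer, as in the trace
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
        # like the traced pipeline, this assumes the CRC bytes contain no NUL/newline
        echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    }
    # format_psk_sketch 00112233445566778899aabbccddeeff 01
    #   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: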
00:19:39.202 19:28:37 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:39.461 19:28:37 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:39.719 [2024-11-17 19:28:37.887876] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.719 [2024-11-17 19:28:37.888138] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.719 19:28:37 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:39.977 malloc0 00:19:39.977 19:28:38 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:40.235 19:28:38 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.493 19:28:38 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.493 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.697 Initializing NVMe Controllers 00:19:52.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:52.697 Initialization complete. Launching workers. 
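Condensing the setup_nvmf_tgt RPCs traced above into one place (rpc.py and key paths shortened; this restates the calls already shown rather than adding configuration steps):

    # TLS-enabled target bring-up as traced
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt

The spdk_nvme_perf invocation above then connects from inside the namespace with -S ssl and --psk-path pointing at the same interchange-format key file; its results follow.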
00:19:52.697 ======================================================== 00:19:52.697 Latency(us) 00:19:52.697 Device Information : IOPS MiB/s Average min max 00:19:52.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7830.57 30.59 8175.68 1127.08 9261.20 00:19:52.697 ======================================================== 00:19:52.697 Total : 7830.57 30.59 8175.68 1127.08 9261.20 00:19:52.697 00:19:52.697 19:28:48 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.697 19:28:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.697 19:28:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:52.697 19:28:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.697 19:28:48 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:52.697 19:28:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.697 19:28:48 -- target/tls.sh@28 -- # bdevperf_pid=1226081 00:19:52.697 19:28:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.697 19:28:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.697 19:28:48 -- target/tls.sh@31 -- # waitforlisten 1226081 /var/tmp/bdevperf.sock 00:19:52.697 19:28:48 -- common/autotest_common.sh@829 -- # '[' -z 1226081 ']' 00:19:52.697 19:28:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.697 19:28:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.697 19:28:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.697 19:28:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.697 19:28:48 -- common/autotest_common.sh@10 -- # set +x 00:19:52.697 [2024-11-17 19:28:48.854183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:52.697 [2024-11-17 19:28:48.854277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226081 ] 00:19:52.697 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.697 [2024-11-17 19:28:48.913957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.697 [2024-11-17 19:28:48.997949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.697 19:28:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.697 19:28:49 -- common/autotest_common.sh@862 -- # return 0 00:19:52.697 19:28:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.697 [2024-11-17 19:28:50.080345] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.697 TLSTESTn1 00:19:52.697 19:28:50 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:52.697 Running I/O for 10 seconds... 00:20:02.674 00:20:02.674 Latency(us) 00:20:02.674 [2024-11-17T18:29:00.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.674 [2024-11-17T18:29:00.941Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:02.674 Verification LBA range: start 0x0 length 0x2000 00:20:02.674 TLSTESTn1 : 10.02 4635.24 18.11 0.00 0.00 27574.97 5072.97 43108.12 00:20:02.674 [2024-11-17T18:29:00.941Z] =================================================================================================================== 00:20:02.674 [2024-11-17T18:29:00.941Z] Total : 4635.24 18.11 0.00 0.00 27574.97 5072.97 43108.12 00:20:02.674 0 00:20:02.674 19:29:00 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:02.674 19:29:00 -- target/tls.sh@45 -- # killprocess 1226081 00:20:02.674 19:29:00 -- common/autotest_common.sh@936 -- # '[' -z 1226081 ']' 00:20:02.674 19:29:00 -- common/autotest_common.sh@940 -- # kill -0 1226081 00:20:02.674 19:29:00 -- common/autotest_common.sh@941 -- # uname 00:20:02.674 19:29:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.674 19:29:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1226081 00:20:02.674 19:29:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:02.674 19:29:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:02.674 19:29:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1226081' 00:20:02.674 killing process with pid 1226081 00:20:02.674 19:29:00 -- common/autotest_common.sh@955 -- # kill 1226081 00:20:02.674 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.674 00:20:02.675 Latency(us) 00:20:02.675 [2024-11-17T18:29:00.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.675 [2024-11-17T18:29:00.942Z] =================================================================================================================== 00:20:02.675 [2024-11-17T18:29:00.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.675 19:29:00 -- 
common/autotest_common.sh@960 -- # wait 1226081 00:20:02.675 19:29:00 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:02.675 19:29:00 -- common/autotest_common.sh@650 -- # local es=0 00:20:02.675 19:29:00 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:02.675 19:29:00 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:02.675 19:29:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.675 19:29:00 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:02.675 19:29:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.675 19:29:00 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:02.675 19:29:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:02.675 19:29:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:02.675 19:29:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:02.675 19:29:00 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:02.675 19:29:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.675 19:29:00 -- target/tls.sh@28 -- # bdevperf_pid=1227568 00:20:02.675 19:29:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.675 19:29:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.675 19:29:00 -- target/tls.sh@31 -- # waitforlisten 1227568 /var/tmp/bdevperf.sock 00:20:02.675 19:29:00 -- common/autotest_common.sh@829 -- # '[' -z 1227568 ']' 00:20:02.675 19:29:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.675 19:29:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.675 19:29:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.675 19:29:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.675 19:29:00 -- common/autotest_common.sh@10 -- # set +x 00:20:02.675 [2024-11-17 19:29:00.643391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:02.675 [2024-11-17 19:29:00.643487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227568 ] 00:20:02.675 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.675 [2024-11-17 19:29:00.702209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.675 [2024-11-17 19:29:00.785527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.609 19:29:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.609 19:29:01 -- common/autotest_common.sh@862 -- # return 0 00:20:03.609 19:29:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:03.868 [2024-11-17 19:29:01.919774] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.868 [2024-11-17 19:29:01.929153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:03.868 [2024-11-17 19:29:01.929486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a70840 (107): Transport endpoint is not connected 00:20:03.868 [2024-11-17 19:29:01.930475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a70840 (9): Bad file descriptor 00:20:03.868 [2024-11-17 19:29:01.931474] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.868 [2024-11-17 19:29:01.931495] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:03.868 [2024-11-17 19:29:01.931508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
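The failure above is the expected outcome: target/tls.sh@155 wraps run_bdevperf in NOT, and the attach uses key2.txt while only key1.txt was registered for host1 via nvmf_subsystem_add_host, so the connection is dropped and the JSON-RPC error dump that follows is the raw failure surfaced to rpc.py. A simplified sketch of that assertion pattern (not the actual autotest_common.sh definition, which also tracks the exit status separately):

    # Succeed only when the wrapped command fails, as the negative TLS cases expect
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test failure
        fi
        return 0        # command failed as expected
    }
    # e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 key2.txt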
00:20:03.868 request: 00:20:03.868 { 00:20:03.868 "name": "TLSTEST", 00:20:03.868 "trtype": "tcp", 00:20:03.868 "traddr": "10.0.0.2", 00:20:03.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.868 "adrfam": "ipv4", 00:20:03.868 "trsvcid": "4420", 00:20:03.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.868 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:03.868 "method": "bdev_nvme_attach_controller", 00:20:03.868 "req_id": 1 00:20:03.868 } 00:20:03.868 Got JSON-RPC error response 00:20:03.868 response: 00:20:03.868 { 00:20:03.868 "code": -32602, 00:20:03.868 "message": "Invalid parameters" 00:20:03.868 } 00:20:03.868 19:29:01 -- target/tls.sh@36 -- # killprocess 1227568 00:20:03.868 19:29:01 -- common/autotest_common.sh@936 -- # '[' -z 1227568 ']' 00:20:03.868 19:29:01 -- common/autotest_common.sh@940 -- # kill -0 1227568 00:20:03.868 19:29:01 -- common/autotest_common.sh@941 -- # uname 00:20:03.868 19:29:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:03.868 19:29:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1227568 00:20:03.868 19:29:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:03.868 19:29:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:03.868 19:29:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1227568' 00:20:03.868 killing process with pid 1227568 00:20:03.868 19:29:01 -- common/autotest_common.sh@955 -- # kill 1227568 00:20:03.868 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.868 00:20:03.868 Latency(us) 00:20:03.868 [2024-11-17T18:29:02.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.868 [2024-11-17T18:29:02.135Z] =================================================================================================================== 00:20:03.868 [2024-11-17T18:29:02.135Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.868 19:29:01 -- common/autotest_common.sh@960 -- # wait 1227568 00:20:04.126 19:29:02 -- target/tls.sh@37 -- # return 1 00:20:04.126 19:29:02 -- common/autotest_common.sh@653 -- # es=1 00:20:04.126 19:29:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.126 19:29:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.126 19:29:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.126 19:29:02 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.126 19:29:02 -- common/autotest_common.sh@650 -- # local es=0 00:20:04.126 19:29:02 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.126 19:29:02 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:04.127 19:29:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.127 19:29:02 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:04.127 19:29:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.127 19:29:02 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.127 19:29:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.127 19:29:02 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.127 19:29:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:04.127 19:29:02 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:04.127 19:29:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.127 19:29:02 -- target/tls.sh@28 -- # bdevperf_pid=1227726 00:20:04.127 19:29:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.127 19:29:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.127 19:29:02 -- target/tls.sh@31 -- # waitforlisten 1227726 /var/tmp/bdevperf.sock 00:20:04.127 19:29:02 -- common/autotest_common.sh@829 -- # '[' -z 1227726 ']' 00:20:04.127 19:29:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.127 19:29:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.127 19:29:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.127 19:29:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.127 19:29:02 -- common/autotest_common.sh@10 -- # set +x 00:20:04.127 [2024-11-17 19:29:02.249589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:04.127 [2024-11-17 19:29:02.249694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227726 ] 00:20:04.127 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.127 [2024-11-17 19:29:02.306071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.127 [2024-11-17 19:29:02.389111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.061 19:29:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.061 19:29:03 -- common/autotest_common.sh@862 -- # return 0 00:20:05.061 19:29:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.320 [2024-11-17 19:29:03.432357] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.320 [2024-11-17 19:29:03.442966] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:05.320 [2024-11-17 19:29:03.443002] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:05.320 [2024-11-17 19:29:03.443058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:05.320 [2024-11-17 19:29:03.443182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2d840 (107): Transport endpoint is not connected 00:20:05.320 
[2024-11-17 19:29:03.444166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2d840 (9): Bad file descriptor 00:20:05.320 [2024-11-17 19:29:03.445165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.320 [2024-11-17 19:29:03.445187] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:05.320 [2024-11-17 19:29:03.445200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.320 request: 00:20:05.320 { 00:20:05.320 "name": "TLSTEST", 00:20:05.320 "trtype": "tcp", 00:20:05.320 "traddr": "10.0.0.2", 00:20:05.320 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:05.320 "adrfam": "ipv4", 00:20:05.320 "trsvcid": "4420", 00:20:05.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.320 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:05.320 "method": "bdev_nvme_attach_controller", 00:20:05.320 "req_id": 1 00:20:05.320 } 00:20:05.320 Got JSON-RPC error response 00:20:05.320 response: 00:20:05.320 { 00:20:05.320 "code": -32602, 00:20:05.320 "message": "Invalid parameters" 00:20:05.320 } 00:20:05.320 19:29:03 -- target/tls.sh@36 -- # killprocess 1227726 00:20:05.320 19:29:03 -- common/autotest_common.sh@936 -- # '[' -z 1227726 ']' 00:20:05.320 19:29:03 -- common/autotest_common.sh@940 -- # kill -0 1227726 00:20:05.320 19:29:03 -- common/autotest_common.sh@941 -- # uname 00:20:05.320 19:29:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.320 19:29:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1227726 00:20:05.320 19:29:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:05.320 19:29:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:05.320 19:29:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1227726' 00:20:05.320 killing process with pid 1227726 00:20:05.320 19:29:03 -- common/autotest_common.sh@955 -- # kill 1227726 00:20:05.320 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.320 00:20:05.320 Latency(us) 00:20:05.320 [2024-11-17T18:29:03.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.320 [2024-11-17T18:29:03.587Z] =================================================================================================================== 00:20:05.320 [2024-11-17T18:29:03.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.320 19:29:03 -- common/autotest_common.sh@960 -- # wait 1227726 00:20:05.579 19:29:03 -- target/tls.sh@37 -- # return 1 00:20:05.579 19:29:03 -- common/autotest_common.sh@653 -- # es=1 00:20:05.579 19:29:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.579 19:29:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.579 19:29:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.579 19:29:03 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.579 19:29:03 -- common/autotest_common.sh@650 -- # local es=0 00:20:05.579 19:29:03 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.579 19:29:03 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:05.579 19:29:03 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.579 19:29:03 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:05.579 19:29:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.579 19:29:03 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.579 19:29:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.579 19:29:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:05.579 19:29:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.579 19:29:03 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:05.579 19:29:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.579 19:29:03 -- target/tls.sh@28 -- # bdevperf_pid=1227945 00:20:05.579 19:29:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.579 19:29:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.579 19:29:03 -- target/tls.sh@31 -- # waitforlisten 1227945 /var/tmp/bdevperf.sock 00:20:05.579 19:29:03 -- common/autotest_common.sh@829 -- # '[' -z 1227945 ']' 00:20:05.579 19:29:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.579 19:29:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.579 19:29:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.579 19:29:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.579 19:29:03 -- common/autotest_common.sh@10 -- # set +x 00:20:05.579 [2024-11-17 19:29:03.764250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:05.579 [2024-11-17 19:29:03.764348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227945 ] 00:20:05.579 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.579 [2024-11-17 19:29:03.822507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.867 [2024-11-17 19:29:03.904823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.458 19:29:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.458 19:29:04 -- common/autotest_common.sh@862 -- # return 0 00:20:06.458 19:29:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.716 [2024-11-17 19:29:04.959768] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.716 [2024-11-17 19:29:04.966912] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.716 [2024-11-17 19:29:04.966947] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.716 [2024-11-17 19:29:04.967001] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.716 [2024-11-17 19:29:04.967540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97b840 (107): Transport endpoint is not connected 00:20:06.716 [2024-11-17 19:29:04.968529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97b840 (9): Bad file descriptor 00:20:06.716 [2024-11-17 19:29:04.969528] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:06.716 [2024-11-17 19:29:04.969548] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.716 [2024-11-17 19:29:04.969561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:06.716 request: 00:20:06.716 { 00:20:06.716 "name": "TLSTEST", 00:20:06.716 "trtype": "tcp", 00:20:06.716 "traddr": "10.0.0.2", 00:20:06.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.716 "adrfam": "ipv4", 00:20:06.716 "trsvcid": "4420", 00:20:06.716 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:06.716 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:06.716 "method": "bdev_nvme_attach_controller", 00:20:06.716 "req_id": 1 00:20:06.716 } 00:20:06.716 Got JSON-RPC error response 00:20:06.716 response: 00:20:06.716 { 00:20:06.716 "code": -32602, 00:20:06.716 "message": "Invalid parameters" 00:20:06.716 } 00:20:06.975 19:29:04 -- target/tls.sh@36 -- # killprocess 1227945 00:20:06.975 19:29:04 -- common/autotest_common.sh@936 -- # '[' -z 1227945 ']' 00:20:06.975 19:29:04 -- common/autotest_common.sh@940 -- # kill -0 1227945 00:20:06.975 19:29:04 -- common/autotest_common.sh@941 -- # uname 00:20:06.975 19:29:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.975 19:29:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1227945 00:20:06.975 19:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:06.975 19:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:06.975 19:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1227945' 00:20:06.975 killing process with pid 1227945 00:20:06.975 19:29:05 -- common/autotest_common.sh@955 -- # kill 1227945 00:20:06.975 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.975 00:20:06.975 Latency(us) 00:20:06.975 [2024-11-17T18:29:05.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.975 [2024-11-17T18:29:05.242Z] =================================================================================================================== 00:20:06.975 [2024-11-17T18:29:05.242Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.975 19:29:05 -- common/autotest_common.sh@960 -- # wait 1227945 00:20:07.233 19:29:05 -- target/tls.sh@37 -- # return 1 00:20:07.233 19:29:05 -- common/autotest_common.sh@653 -- # es=1 00:20:07.233 19:29:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.234 19:29:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.234 19:29:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.234 19:29:05 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.234 19:29:05 -- common/autotest_common.sh@650 -- # local es=0 00:20:07.234 19:29:05 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.234 19:29:05 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:07.234 19:29:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.234 19:29:05 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:07.234 19:29:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.234 19:29:05 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.234 19:29:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.234 19:29:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.234 19:29:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.234 19:29:05 -- target/tls.sh@23 -- # psk= 00:20:07.234 19:29:05 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.234 19:29:05 -- target/tls.sh@28 -- # bdevperf_pid=1228152 00:20:07.234 19:29:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.234 19:29:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.234 19:29:05 -- target/tls.sh@31 -- # waitforlisten 1228152 /var/tmp/bdevperf.sock 00:20:07.234 19:29:05 -- common/autotest_common.sh@829 -- # '[' -z 1228152 ']' 00:20:07.234 19:29:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.234 19:29:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.234 19:29:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.234 19:29:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.234 19:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:07.234 [2024-11-17 19:29:05.284602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:07.234 [2024-11-17 19:29:05.284692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228152 ] 00:20:07.234 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.234 [2024-11-17 19:29:05.340430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.234 [2024-11-17 19:29:05.421549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.169 19:29:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.169 19:29:06 -- common/autotest_common.sh@862 -- # return 0 00:20:08.169 19:29:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:08.428 [2024-11-17 19:29:06.470791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:08.428 [2024-11-17 19:29:06.472746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a0dd0 (9): Bad file descriptor 00:20:08.428 [2024-11-17 19:29:06.473743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:08.428 [2024-11-17 19:29:06.473765] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:08.428 [2024-11-17 19:29:06.473778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:08.428 request: 00:20:08.428 { 00:20:08.428 "name": "TLSTEST", 00:20:08.428 "trtype": "tcp", 00:20:08.428 "traddr": "10.0.0.2", 00:20:08.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.428 "adrfam": "ipv4", 00:20:08.428 "trsvcid": "4420", 00:20:08.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.428 "method": "bdev_nvme_attach_controller", 00:20:08.428 "req_id": 1 00:20:08.428 } 00:20:08.428 Got JSON-RPC error response 00:20:08.428 response: 00:20:08.428 { 00:20:08.428 "code": -32602, 00:20:08.428 "message": "Invalid parameters" 00:20:08.428 } 00:20:08.428 19:29:06 -- target/tls.sh@36 -- # killprocess 1228152 00:20:08.428 19:29:06 -- common/autotest_common.sh@936 -- # '[' -z 1228152 ']' 00:20:08.428 19:29:06 -- common/autotest_common.sh@940 -- # kill -0 1228152 00:20:08.428 19:29:06 -- common/autotest_common.sh@941 -- # uname 00:20:08.428 19:29:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.428 19:29:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1228152 00:20:08.428 19:29:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:08.428 19:29:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:08.428 19:29:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1228152' 00:20:08.428 killing process with pid 1228152 00:20:08.428 19:29:06 -- common/autotest_common.sh@955 -- # kill 1228152 00:20:08.428 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.428 00:20:08.428 Latency(us) 00:20:08.428 [2024-11-17T18:29:06.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.428 [2024-11-17T18:29:06.695Z] =================================================================================================================== 00:20:08.428 [2024-11-17T18:29:06.695Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:08.428 19:29:06 -- common/autotest_common.sh@960 -- # wait 1228152 00:20:08.686 19:29:06 -- target/tls.sh@37 -- # return 1 00:20:08.686 19:29:06 -- common/autotest_common.sh@653 -- # es=1 00:20:08.686 19:29:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.686 19:29:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.686 19:29:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.686 19:29:06 -- target/tls.sh@167 -- # killprocess 1224246 00:20:08.686 19:29:06 -- common/autotest_common.sh@936 -- # '[' -z 1224246 ']' 00:20:08.687 19:29:06 -- common/autotest_common.sh@940 -- # kill -0 1224246 00:20:08.687 19:29:06 -- common/autotest_common.sh@941 -- # uname 00:20:08.687 19:29:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.687 19:29:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1224246 00:20:08.687 19:29:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:08.687 19:29:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:08.687 19:29:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1224246' 00:20:08.687 killing process with pid 1224246 00:20:08.687 19:29:06 -- common/autotest_common.sh@955 -- # kill 1224246 00:20:08.687 19:29:06 -- common/autotest_common.sh@960 -- # wait 1224246 00:20:08.945 19:29:07 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:08.945 19:29:07 -- target/tls.sh@49 -- # local key hash crc 00:20:08.945 19:29:07 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:08.945 19:29:07 -- 
target/tls.sh@51 -- # hash=02 00:20:08.945 19:29:07 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:20:08.945 19:29:07 -- target/tls.sh@52 -- # gzip -1 -c 00:20:08.945 19:29:07 -- target/tls.sh@52 -- # tail -c8 00:20:08.945 19:29:07 -- target/tls.sh@52 -- # head -c 4 00:20:08.945 19:29:07 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:08.945 19:29:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:08.945 19:29:07 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:08.945 19:29:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:08.945 19:29:07 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:08.945 19:29:07 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:08.945 19:29:07 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:08.945 19:29:07 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:08.945 19:29:07 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:08.945 19:29:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:08.945 19:29:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.945 19:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.945 19:29:07 -- nvmf/common.sh@469 -- # nvmfpid=1228380 00:20:08.945 19:29:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.945 19:29:07 -- nvmf/common.sh@470 -- # waitforlisten 1228380 00:20:08.945 19:29:07 -- common/autotest_common.sh@829 -- # '[' -z 1228380 ']' 00:20:08.945 19:29:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.945 19:29:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.945 19:29:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.945 19:29:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.945 19:29:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.945 [2024-11-17 19:29:07.114260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:08.945 [2024-11-17 19:29:07.114362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.945 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.945 [2024-11-17 19:29:07.186402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.204 [2024-11-17 19:29:07.274454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.204 [2024-11-17 19:29:07.274633] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.204 [2024-11-17 19:29:07.274655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
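The longer 48-character key above goes through the same derivation as before, only with hash indicator 02; with the earlier sketch this is just:

    # Same pipeline, 48-character key, hash indicator 02 (output matches the trace):
    # format_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 02
    #   -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: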
00:20:09.204 [2024-11-17 19:29:07.274671] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.204 [2024-11-17 19:29:07.274723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.137 19:29:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.137 19:29:08 -- common/autotest_common.sh@862 -- # return 0 00:20:10.137 19:29:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:10.137 19:29:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.137 19:29:08 -- common/autotest_common.sh@10 -- # set +x 00:20:10.137 19:29:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.138 19:29:08 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.138 19:29:08 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.138 19:29:08 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.138 [2024-11-17 19:29:08.319984] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.138 19:29:08 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.395 19:29:08 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.654 [2024-11-17 19:29:08.825361] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.654 [2024-11-17 19:29:08.825654] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.654 19:29:08 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.913 malloc0 00:20:10.913 19:29:09 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.171 19:29:09 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.429 19:29:09 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.429 19:29:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.429 19:29:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.429 19:29:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.429 19:29:09 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:11.429 19:29:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.429 19:29:09 -- target/tls.sh@28 -- # bdevperf_pid=1228742 00:20:11.429 19:29:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.429 19:29:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.429 19:29:09 -- 
target/tls.sh@31 -- # waitforlisten 1228742 /var/tmp/bdevperf.sock 00:20:11.429 19:29:09 -- common/autotest_common.sh@829 -- # '[' -z 1228742 ']' 00:20:11.429 19:29:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.429 19:29:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.429 19:29:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.429 19:29:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.429 19:29:09 -- common/autotest_common.sh@10 -- # set +x 00:20:11.429 [2024-11-17 19:29:09.642878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:11.429 [2024-11-17 19:29:09.642955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228742 ] 00:20:11.429 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.688 [2024-11-17 19:29:09.700628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.688 [2024-11-17 19:29:09.781596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.622 19:29:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.622 19:29:10 -- common/autotest_common.sh@862 -- # return 0 00:20:12.622 19:29:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:12.622 [2024-11-17 19:29:10.841864] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.881 TLSTESTn1 00:20:12.881 19:29:10 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:12.881 Running I/O for 10 seconds... 
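As in the earlier run, bdevperf is started with -z on its own RPC socket and, per the traced flow, stays idle until the workload is kicked off; the TLS controller is attached over that socket and bdevperf.py issues perform_tests. A condensed restatement of the three traced steps (binary and key paths shortened):

    # bdevperf-over-TLS run as traced
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key_long.txt
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests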
00:20:22.855 00:20:22.855 Latency(us) 00:20:22.855 [2024-11-17T18:29:21.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.855 [2024-11-17T18:29:21.122Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:22.855 Verification LBA range: start 0x0 length 0x2000 00:20:22.855 TLSTESTn1 : 10.02 4313.29 16.85 0.00 0.00 29631.29 4271.98 48351.00 00:20:22.855 [2024-11-17T18:29:21.122Z] =================================================================================================================== 00:20:22.855 [2024-11-17T18:29:21.122Z] Total : 4313.29 16.85 0.00 0.00 29631.29 4271.98 48351.00 00:20:22.855 0 00:20:22.855 19:29:21 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.855 19:29:21 -- target/tls.sh@45 -- # killprocess 1228742 00:20:22.855 19:29:21 -- common/autotest_common.sh@936 -- # '[' -z 1228742 ']' 00:20:22.855 19:29:21 -- common/autotest_common.sh@940 -- # kill -0 1228742 00:20:22.855 19:29:21 -- common/autotest_common.sh@941 -- # uname 00:20:22.855 19:29:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.855 19:29:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1228742 00:20:23.114 19:29:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:23.114 19:29:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:23.114 19:29:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1228742' 00:20:23.114 killing process with pid 1228742 00:20:23.114 19:29:21 -- common/autotest_common.sh@955 -- # kill 1228742 00:20:23.114 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.114 00:20:23.114 Latency(us) 00:20:23.114 [2024-11-17T18:29:21.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.114 [2024-11-17T18:29:21.381Z] =================================================================================================================== 00:20:23.114 [2024-11-17T18:29:21.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.114 19:29:21 -- common/autotest_common.sh@960 -- # wait 1228742 00:20:23.114 19:29:21 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.114 19:29:21 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.114 19:29:21 -- common/autotest_common.sh@650 -- # local es=0 00:20:23.114 19:29:21 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.114 19:29:21 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:23.114 19:29:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.114 19:29:21 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:23.114 19:29:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.114 19:29:21 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.114 19:29:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.114 19:29:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.114 19:29:21 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.114 19:29:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:23.114 19:29:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.114 19:29:21 -- target/tls.sh@28 -- # bdevperf_pid=1230117 00:20:23.114 19:29:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.114 19:29:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.114 19:29:21 -- target/tls.sh@31 -- # waitforlisten 1230117 /var/tmp/bdevperf.sock 00:20:23.114 19:29:21 -- common/autotest_common.sh@829 -- # '[' -z 1230117 ']' 00:20:23.114 19:29:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.114 19:29:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.114 19:29:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.114 19:29:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.114 19:29:21 -- common/autotest_common.sh@10 -- # set +x 00:20:23.372 [2024-11-17 19:29:21.393352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:23.372 [2024-11-17 19:29:21.393432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230117 ] 00:20:23.372 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.372 [2024-11-17 19:29:21.467897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.372 [2024-11-17 19:29:21.565919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.631 19:29:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.631 19:29:21 -- common/autotest_common.sh@862 -- # return 0 00:20:23.631 19:29:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.889 [2024-11-17 19:29:21.992873] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.889 [2024-11-17 19:29:21.992933] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:23.889 request: 00:20:23.889 { 00:20:23.889 "name": "TLSTEST", 00:20:23.889 "trtype": "tcp", 00:20:23.889 "traddr": "10.0.0.2", 00:20:23.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.889 "adrfam": "ipv4", 00:20:23.889 "trsvcid": "4420", 00:20:23.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.889 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:23.889 "method": "bdev_nvme_attach_controller", 00:20:23.889 "req_id": 1 00:20:23.889 } 00:20:23.889 Got JSON-RPC error response 00:20:23.889 response: 00:20:23.889 { 00:20:23.889 "code": -22, 00:20:23.889 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:23.889 } 
00:20:23.889 19:29:22 -- target/tls.sh@36 -- # killprocess 1230117 00:20:23.889 19:29:22 -- common/autotest_common.sh@936 -- # '[' -z 1230117 ']' 00:20:23.889 19:29:22 -- common/autotest_common.sh@940 -- # kill -0 1230117 00:20:23.889 19:29:22 -- common/autotest_common.sh@941 -- # uname 00:20:23.889 19:29:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.889 19:29:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1230117 00:20:23.889 19:29:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:23.889 19:29:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:23.889 19:29:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1230117' 00:20:23.889 killing process with pid 1230117 00:20:23.889 19:29:22 -- common/autotest_common.sh@955 -- # kill 1230117 00:20:23.889 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.889 00:20:23.889 Latency(us) 00:20:23.889 [2024-11-17T18:29:22.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.889 [2024-11-17T18:29:22.156Z] =================================================================================================================== 00:20:23.889 [2024-11-17T18:29:22.156Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.889 19:29:22 -- common/autotest_common.sh@960 -- # wait 1230117 00:20:24.147 19:29:22 -- target/tls.sh@37 -- # return 1 00:20:24.147 19:29:22 -- common/autotest_common.sh@653 -- # es=1 00:20:24.147 19:29:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.147 19:29:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.147 19:29:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.147 19:29:22 -- target/tls.sh@183 -- # killprocess 1228380 00:20:24.147 19:29:22 -- common/autotest_common.sh@936 -- # '[' -z 1228380 ']' 00:20:24.147 19:29:22 -- common/autotest_common.sh@940 -- # kill -0 1228380 00:20:24.147 19:29:22 -- common/autotest_common.sh@941 -- # uname 00:20:24.147 19:29:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:24.147 19:29:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1228380 00:20:24.147 19:29:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:24.147 19:29:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:24.147 19:29:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1228380' 00:20:24.147 killing process with pid 1228380 00:20:24.147 19:29:22 -- common/autotest_common.sh@955 -- # kill 1228380 00:20:24.147 19:29:22 -- common/autotest_common.sh@960 -- # wait 1228380 00:20:24.406 19:29:22 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:24.406 19:29:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:24.406 19:29:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.406 19:29:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.406 19:29:22 -- nvmf/common.sh@469 -- # nvmfpid=1230267 00:20:24.406 19:29:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:24.406 19:29:22 -- nvmf/common.sh@470 -- # waitforlisten 1230267 00:20:24.406 19:29:22 -- common/autotest_common.sh@829 -- # '[' -z 1230267 ']' 00:20:24.406 19:29:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.406 19:29:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.406 19:29:22 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.406 19:29:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.406 19:29:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.406 [2024-11-17 19:29:22.574236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:24.406 [2024-11-17 19:29:22.574329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.406 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.406 [2024-11-17 19:29:22.641687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.664 [2024-11-17 19:29:22.738971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.664 [2024-11-17 19:29:22.739156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.665 [2024-11-17 19:29:22.739177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.665 [2024-11-17 19:29:22.739192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.665 [2024-11-17 19:29:22.739235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.599 19:29:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.599 19:29:23 -- common/autotest_common.sh@862 -- # return 0 00:20:25.599 19:29:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:25.599 19:29:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.599 19:29:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.599 19:29:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.599 19:29:23 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.599 19:29:23 -- common/autotest_common.sh@650 -- # local es=0 00:20:25.599 19:29:23 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.599 19:29:23 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:25.599 19:29:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.599 19:29:23 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:25.599 19:29:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.599 19:29:23 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.599 19:29:23 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.599 19:29:23 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.599 [2024-11-17 19:29:23.831732] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.599 19:29:23 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.857 19:29:24 -- 
target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:26.115 [2024-11-17 19:29:24.337107] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.115 [2024-11-17 19:29:24.337360] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.115 19:29:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:26.373 malloc0 00:20:26.373 19:29:24 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:26.630 19:29:24 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.887 [2024-11-17 19:29:25.095305] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:26.887 [2024-11-17 19:29:25.095343] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:26.887 [2024-11-17 19:29:25.095378] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:26.887 request: 00:20:26.887 { 00:20:26.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.887 "host": "nqn.2016-06.io.spdk:host1", 00:20:26.888 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:26.888 "method": "nvmf_subsystem_add_host", 00:20:26.888 "req_id": 1 00:20:26.888 } 00:20:26.888 Got JSON-RPC error response 00:20:26.888 response: 00:20:26.888 { 00:20:26.888 "code": -32603, 00:20:26.888 "message": "Internal error" 00:20:26.888 } 00:20:26.888 19:29:25 -- common/autotest_common.sh@653 -- # es=1 00:20:26.888 19:29:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.888 19:29:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.888 19:29:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.888 19:29:25 -- target/tls.sh@189 -- # killprocess 1230267 00:20:26.888 19:29:25 -- common/autotest_common.sh@936 -- # '[' -z 1230267 ']' 00:20:26.888 19:29:25 -- common/autotest_common.sh@940 -- # kill -0 1230267 00:20:26.888 19:29:25 -- common/autotest_common.sh@941 -- # uname 00:20:26.888 19:29:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.888 19:29:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1230267 00:20:26.888 19:29:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:26.888 19:29:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:26.888 19:29:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1230267' 00:20:26.888 killing process with pid 1230267 00:20:26.888 19:29:25 -- common/autotest_common.sh@955 -- # kill 1230267 00:20:26.888 19:29:25 -- common/autotest_common.sh@960 -- # wait 1230267 00:20:27.146 19:29:25 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.146 19:29:25 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:27.146 19:29:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:27.146 19:29:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.146 19:29:25 -- common/autotest_common.sh@10 -- # set +x 
00:20:27.146 19:29:25 -- nvmf/common.sh@469 -- # nvmfpid=1230702 00:20:27.146 19:29:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.146 19:29:25 -- nvmf/common.sh@470 -- # waitforlisten 1230702 00:20:27.146 19:29:25 -- common/autotest_common.sh@829 -- # '[' -z 1230702 ']' 00:20:27.146 19:29:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.146 19:29:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.146 19:29:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.146 19:29:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.146 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:20:27.407 [2024-11-17 19:29:25.434430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:27.407 [2024-11-17 19:29:25.434503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.407 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.407 [2024-11-17 19:29:25.498190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.407 [2024-11-17 19:29:25.586684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.407 [2024-11-17 19:29:25.586842] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.407 [2024-11-17 19:29:25.586860] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.407 [2024-11-17 19:29:25.586873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:27.407 [2024-11-17 19:29:25.586902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.340 19:29:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.340 19:29:26 -- common/autotest_common.sh@862 -- # return 0 00:20:28.340 19:29:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:28.340 19:29:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.340 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:20:28.340 19:29:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.340 19:29:26 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.340 19:29:26 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.341 19:29:26 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.598 [2024-11-17 19:29:26.657076] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.598 19:29:26 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.857 19:29:26 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.115 [2024-11-17 19:29:27.154405] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.115 [2024-11-17 19:29:27.154636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.115 19:29:27 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.373 malloc0 00:20:29.373 19:29:27 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.631 19:29:27 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:29.889 19:29:27 -- target/tls.sh@197 -- # bdevperf_pid=1231000 00:20:29.889 19:29:27 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.889 19:29:27 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.890 19:29:27 -- target/tls.sh@200 -- # waitforlisten 1231000 /var/tmp/bdevperf.sock 00:20:29.890 19:29:27 -- common/autotest_common.sh@829 -- # '[' -z 1231000 ']' 00:20:29.890 19:29:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.890 19:29:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.890 19:29:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:29.890 19:29:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.890 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:20:29.890 [2024-11-17 19:29:27.963469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:29.890 [2024-11-17 19:29:27.963530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231000 ] 00:20:29.890 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.890 [2024-11-17 19:29:28.019195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.890 [2024-11-17 19:29:28.100309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.824 19:29:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.824 19:29:28 -- common/autotest_common.sh@862 -- # return 0 00:20:30.824 19:29:28 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:31.093 [2024-11-17 19:29:29.146324] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.093 TLSTESTn1 00:20:31.093 19:29:29 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:31.360 19:29:29 -- target/tls.sh@205 -- # tgtconf='{ 00:20:31.360 "subsystems": [ 00:20:31.360 { 00:20:31.360 "subsystem": "iobuf", 00:20:31.360 "config": [ 00:20:31.360 { 00:20:31.360 "method": "iobuf_set_options", 00:20:31.360 "params": { 00:20:31.360 "small_pool_count": 8192, 00:20:31.360 "large_pool_count": 1024, 00:20:31.360 "small_bufsize": 8192, 00:20:31.360 "large_bufsize": 135168 00:20:31.360 } 00:20:31.360 } 00:20:31.360 ] 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "subsystem": "sock", 00:20:31.360 "config": [ 00:20:31.360 { 00:20:31.360 "method": "sock_impl_set_options", 00:20:31.360 "params": { 00:20:31.360 "impl_name": "posix", 00:20:31.360 "recv_buf_size": 2097152, 00:20:31.360 "send_buf_size": 2097152, 00:20:31.360 "enable_recv_pipe": true, 00:20:31.360 "enable_quickack": false, 00:20:31.360 "enable_placement_id": 0, 00:20:31.360 "enable_zerocopy_send_server": true, 00:20:31.360 "enable_zerocopy_send_client": false, 00:20:31.360 "zerocopy_threshold": 0, 00:20:31.360 "tls_version": 0, 00:20:31.360 "enable_ktls": false 00:20:31.360 } 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "method": "sock_impl_set_options", 00:20:31.360 "params": { 00:20:31.360 "impl_name": "ssl", 00:20:31.360 "recv_buf_size": 4096, 00:20:31.360 "send_buf_size": 4096, 00:20:31.360 "enable_recv_pipe": true, 00:20:31.360 "enable_quickack": false, 00:20:31.360 "enable_placement_id": 0, 00:20:31.360 "enable_zerocopy_send_server": true, 00:20:31.360 "enable_zerocopy_send_client": false, 00:20:31.360 "zerocopy_threshold": 0, 00:20:31.360 "tls_version": 0, 00:20:31.360 "enable_ktls": false 00:20:31.360 } 00:20:31.360 } 00:20:31.360 ] 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "subsystem": "vmd", 00:20:31.360 "config": [] 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "subsystem": "accel", 00:20:31.360 "config": [ 00:20:31.360 { 00:20:31.360 "method": "accel_set_options", 00:20:31.360 "params": { 00:20:31.360 "small_cache_size": 128, 
00:20:31.360 "large_cache_size": 16, 00:20:31.360 "task_count": 2048, 00:20:31.360 "sequence_count": 2048, 00:20:31.360 "buf_count": 2048 00:20:31.360 } 00:20:31.360 } 00:20:31.360 ] 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "subsystem": "bdev", 00:20:31.360 "config": [ 00:20:31.360 { 00:20:31.360 "method": "bdev_set_options", 00:20:31.360 "params": { 00:20:31.360 "bdev_io_pool_size": 65535, 00:20:31.360 "bdev_io_cache_size": 256, 00:20:31.360 "bdev_auto_examine": true, 00:20:31.360 "iobuf_small_cache_size": 128, 00:20:31.360 "iobuf_large_cache_size": 16 00:20:31.360 } 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "method": "bdev_raid_set_options", 00:20:31.360 "params": { 00:20:31.360 "process_window_size_kb": 1024 00:20:31.360 } 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "method": "bdev_iscsi_set_options", 00:20:31.360 "params": { 00:20:31.360 "timeout_sec": 30 00:20:31.360 } 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "method": "bdev_nvme_set_options", 00:20:31.360 "params": { 00:20:31.360 "action_on_timeout": "none", 00:20:31.360 "timeout_us": 0, 00:20:31.360 "timeout_admin_us": 0, 00:20:31.360 "keep_alive_timeout_ms": 10000, 00:20:31.360 "transport_retry_count": 4, 00:20:31.360 "arbitration_burst": 0, 00:20:31.360 "low_priority_weight": 0, 00:20:31.360 "medium_priority_weight": 0, 00:20:31.360 "high_priority_weight": 0, 00:20:31.360 "nvme_adminq_poll_period_us": 10000, 00:20:31.360 "nvme_ioq_poll_period_us": 0, 00:20:31.360 "io_queue_requests": 0, 00:20:31.360 "delay_cmd_submit": true, 00:20:31.360 "bdev_retry_count": 3, 00:20:31.360 "transport_ack_timeout": 0, 00:20:31.360 "ctrlr_loss_timeout_sec": 0, 00:20:31.360 "reconnect_delay_sec": 0, 00:20:31.360 "fast_io_fail_timeout_sec": 0, 00:20:31.360 "generate_uuids": false, 00:20:31.360 "transport_tos": 0, 00:20:31.360 "io_path_stat": false, 00:20:31.360 "allow_accel_sequence": false 00:20:31.360 } 00:20:31.360 }, 00:20:31.360 { 00:20:31.360 "method": "bdev_nvme_set_hotplug", 00:20:31.360 "params": { 00:20:31.360 "period_us": 100000, 00:20:31.361 "enable": false 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "bdev_malloc_create", 00:20:31.361 "params": { 00:20:31.361 "name": "malloc0", 00:20:31.361 "num_blocks": 8192, 00:20:31.361 "block_size": 4096, 00:20:31.361 "physical_block_size": 4096, 00:20:31.361 "uuid": "b0b43be4-2a00-4319-9f57-8cd8e37ce267", 00:20:31.361 "optimal_io_boundary": 0 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "bdev_wait_for_examine" 00:20:31.361 } 00:20:31.361 ] 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "subsystem": "nbd", 00:20:31.361 "config": [] 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "subsystem": "scheduler", 00:20:31.361 "config": [ 00:20:31.361 { 00:20:31.361 "method": "framework_set_scheduler", 00:20:31.361 "params": { 00:20:31.361 "name": "static" 00:20:31.361 } 00:20:31.361 } 00:20:31.361 ] 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "subsystem": "nvmf", 00:20:31.361 "config": [ 00:20:31.361 { 00:20:31.361 "method": "nvmf_set_config", 00:20:31.361 "params": { 00:20:31.361 "discovery_filter": "match_any", 00:20:31.361 "admin_cmd_passthru": { 00:20:31.361 "identify_ctrlr": false 00:20:31.361 } 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_set_max_subsystems", 00:20:31.361 "params": { 00:20:31.361 "max_subsystems": 1024 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_set_crdt", 00:20:31.361 "params": { 00:20:31.361 "crdt1": 0, 00:20:31.361 "crdt2": 0, 00:20:31.361 "crdt3": 0 00:20:31.361 } 
00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_create_transport", 00:20:31.361 "params": { 00:20:31.361 "trtype": "TCP", 00:20:31.361 "max_queue_depth": 128, 00:20:31.361 "max_io_qpairs_per_ctrlr": 127, 00:20:31.361 "in_capsule_data_size": 4096, 00:20:31.361 "max_io_size": 131072, 00:20:31.361 "io_unit_size": 131072, 00:20:31.361 "max_aq_depth": 128, 00:20:31.361 "num_shared_buffers": 511, 00:20:31.361 "buf_cache_size": 4294967295, 00:20:31.361 "dif_insert_or_strip": false, 00:20:31.361 "zcopy": false, 00:20:31.361 "c2h_success": false, 00:20:31.361 "sock_priority": 0, 00:20:31.361 "abort_timeout_sec": 1 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_create_subsystem", 00:20:31.361 "params": { 00:20:31.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.361 "allow_any_host": false, 00:20:31.361 "serial_number": "SPDK00000000000001", 00:20:31.361 "model_number": "SPDK bdev Controller", 00:20:31.361 "max_namespaces": 10, 00:20:31.361 "min_cntlid": 1, 00:20:31.361 "max_cntlid": 65519, 00:20:31.361 "ana_reporting": false 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_subsystem_add_host", 00:20:31.361 "params": { 00:20:31.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.361 "host": "nqn.2016-06.io.spdk:host1", 00:20:31.361 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_subsystem_add_ns", 00:20:31.361 "params": { 00:20:31.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.361 "namespace": { 00:20:31.361 "nsid": 1, 00:20:31.361 "bdev_name": "malloc0", 00:20:31.361 "nguid": "B0B43BE42A0043199F578CD8E37CE267", 00:20:31.361 "uuid": "b0b43be4-2a00-4319-9f57-8cd8e37ce267" 00:20:31.361 } 00:20:31.361 } 00:20:31.361 }, 00:20:31.361 { 00:20:31.361 "method": "nvmf_subsystem_add_listener", 00:20:31.361 "params": { 00:20:31.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.361 "listen_address": { 00:20:31.361 "trtype": "TCP", 00:20:31.361 "adrfam": "IPv4", 00:20:31.361 "traddr": "10.0.0.2", 00:20:31.361 "trsvcid": "4420" 00:20:31.361 }, 00:20:31.361 "secure_channel": true 00:20:31.361 } 00:20:31.361 } 00:20:31.361 ] 00:20:31.361 } 00:20:31.361 ] 00:20:31.361 }' 00:20:31.361 19:29:29 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:31.620 19:29:29 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:31.620 "subsystems": [ 00:20:31.620 { 00:20:31.620 "subsystem": "iobuf", 00:20:31.620 "config": [ 00:20:31.620 { 00:20:31.620 "method": "iobuf_set_options", 00:20:31.620 "params": { 00:20:31.620 "small_pool_count": 8192, 00:20:31.620 "large_pool_count": 1024, 00:20:31.620 "small_bufsize": 8192, 00:20:31.620 "large_bufsize": 135168 00:20:31.620 } 00:20:31.620 } 00:20:31.620 ] 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "subsystem": "sock", 00:20:31.620 "config": [ 00:20:31.620 { 00:20:31.620 "method": "sock_impl_set_options", 00:20:31.620 "params": { 00:20:31.620 "impl_name": "posix", 00:20:31.620 "recv_buf_size": 2097152, 00:20:31.620 "send_buf_size": 2097152, 00:20:31.620 "enable_recv_pipe": true, 00:20:31.620 "enable_quickack": false, 00:20:31.620 "enable_placement_id": 0, 00:20:31.620 "enable_zerocopy_send_server": true, 00:20:31.620 "enable_zerocopy_send_client": false, 00:20:31.620 "zerocopy_threshold": 0, 00:20:31.620 "tls_version": 0, 00:20:31.620 "enable_ktls": false 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": 
"sock_impl_set_options", 00:20:31.620 "params": { 00:20:31.620 "impl_name": "ssl", 00:20:31.620 "recv_buf_size": 4096, 00:20:31.620 "send_buf_size": 4096, 00:20:31.620 "enable_recv_pipe": true, 00:20:31.620 "enable_quickack": false, 00:20:31.620 "enable_placement_id": 0, 00:20:31.620 "enable_zerocopy_send_server": true, 00:20:31.620 "enable_zerocopy_send_client": false, 00:20:31.620 "zerocopy_threshold": 0, 00:20:31.620 "tls_version": 0, 00:20:31.620 "enable_ktls": false 00:20:31.620 } 00:20:31.620 } 00:20:31.620 ] 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "subsystem": "vmd", 00:20:31.620 "config": [] 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "subsystem": "accel", 00:20:31.620 "config": [ 00:20:31.620 { 00:20:31.620 "method": "accel_set_options", 00:20:31.620 "params": { 00:20:31.620 "small_cache_size": 128, 00:20:31.620 "large_cache_size": 16, 00:20:31.620 "task_count": 2048, 00:20:31.620 "sequence_count": 2048, 00:20:31.620 "buf_count": 2048 00:20:31.620 } 00:20:31.620 } 00:20:31.620 ] 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "subsystem": "bdev", 00:20:31.620 "config": [ 00:20:31.620 { 00:20:31.620 "method": "bdev_set_options", 00:20:31.620 "params": { 00:20:31.620 "bdev_io_pool_size": 65535, 00:20:31.620 "bdev_io_cache_size": 256, 00:20:31.620 "bdev_auto_examine": true, 00:20:31.620 "iobuf_small_cache_size": 128, 00:20:31.620 "iobuf_large_cache_size": 16 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": "bdev_raid_set_options", 00:20:31.620 "params": { 00:20:31.620 "process_window_size_kb": 1024 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": "bdev_iscsi_set_options", 00:20:31.620 "params": { 00:20:31.620 "timeout_sec": 30 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": "bdev_nvme_set_options", 00:20:31.620 "params": { 00:20:31.620 "action_on_timeout": "none", 00:20:31.620 "timeout_us": 0, 00:20:31.620 "timeout_admin_us": 0, 00:20:31.620 "keep_alive_timeout_ms": 10000, 00:20:31.620 "transport_retry_count": 4, 00:20:31.620 "arbitration_burst": 0, 00:20:31.620 "low_priority_weight": 0, 00:20:31.620 "medium_priority_weight": 0, 00:20:31.620 "high_priority_weight": 0, 00:20:31.620 "nvme_adminq_poll_period_us": 10000, 00:20:31.620 "nvme_ioq_poll_period_us": 0, 00:20:31.620 "io_queue_requests": 512, 00:20:31.620 "delay_cmd_submit": true, 00:20:31.620 "bdev_retry_count": 3, 00:20:31.620 "transport_ack_timeout": 0, 00:20:31.620 "ctrlr_loss_timeout_sec": 0, 00:20:31.620 "reconnect_delay_sec": 0, 00:20:31.620 "fast_io_fail_timeout_sec": 0, 00:20:31.620 "generate_uuids": false, 00:20:31.620 "transport_tos": 0, 00:20:31.620 "io_path_stat": false, 00:20:31.620 "allow_accel_sequence": false 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": "bdev_nvme_attach_controller", 00:20:31.620 "params": { 00:20:31.620 "name": "TLSTEST", 00:20:31.620 "trtype": "TCP", 00:20:31.620 "adrfam": "IPv4", 00:20:31.620 "traddr": "10.0.0.2", 00:20:31.620 "trsvcid": "4420", 00:20:31.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.620 "prchk_reftag": false, 00:20:31.620 "prchk_guard": false, 00:20:31.620 "ctrlr_loss_timeout_sec": 0, 00:20:31.620 "reconnect_delay_sec": 0, 00:20:31.620 "fast_io_fail_timeout_sec": 0, 00:20:31.620 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:31.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.620 "hdgst": false, 00:20:31.620 "ddgst": false 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": "bdev_nvme_set_hotplug", 00:20:31.620 
"params": { 00:20:31.620 "period_us": 100000, 00:20:31.620 "enable": false 00:20:31.620 } 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "method": "bdev_wait_for_examine" 00:20:31.620 } 00:20:31.620 ] 00:20:31.620 }, 00:20:31.620 { 00:20:31.620 "subsystem": "nbd", 00:20:31.620 "config": [] 00:20:31.620 } 00:20:31.620 ] 00:20:31.620 }' 00:20:31.620 19:29:29 -- target/tls.sh@208 -- # killprocess 1231000 00:20:31.620 19:29:29 -- common/autotest_common.sh@936 -- # '[' -z 1231000 ']' 00:20:31.620 19:29:29 -- common/autotest_common.sh@940 -- # kill -0 1231000 00:20:31.620 19:29:29 -- common/autotest_common.sh@941 -- # uname 00:20:31.879 19:29:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:31.879 19:29:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1231000 00:20:31.879 19:29:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:31.879 19:29:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:31.879 19:29:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1231000' 00:20:31.879 killing process with pid 1231000 00:20:31.879 19:29:29 -- common/autotest_common.sh@955 -- # kill 1231000 00:20:31.879 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.879 00:20:31.879 Latency(us) 00:20:31.879 [2024-11-17T18:29:30.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.879 [2024-11-17T18:29:30.146Z] =================================================================================================================== 00:20:31.879 [2024-11-17T18:29:30.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.879 19:29:29 -- common/autotest_common.sh@960 -- # wait 1231000 00:20:32.138 19:29:30 -- target/tls.sh@209 -- # killprocess 1230702 00:20:32.138 19:29:30 -- common/autotest_common.sh@936 -- # '[' -z 1230702 ']' 00:20:32.138 19:29:30 -- common/autotest_common.sh@940 -- # kill -0 1230702 00:20:32.138 19:29:30 -- common/autotest_common.sh@941 -- # uname 00:20:32.138 19:29:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.138 19:29:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1230702 00:20:32.138 19:29:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.138 19:29:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.138 19:29:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1230702' 00:20:32.138 killing process with pid 1230702 00:20:32.138 19:29:30 -- common/autotest_common.sh@955 -- # kill 1230702 00:20:32.138 19:29:30 -- common/autotest_common.sh@960 -- # wait 1230702 00:20:32.396 19:29:30 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:32.396 19:29:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:32.396 19:29:30 -- target/tls.sh@212 -- # echo '{ 00:20:32.396 "subsystems": [ 00:20:32.396 { 00:20:32.396 "subsystem": "iobuf", 00:20:32.396 "config": [ 00:20:32.396 { 00:20:32.396 "method": "iobuf_set_options", 00:20:32.396 "params": { 00:20:32.396 "small_pool_count": 8192, 00:20:32.396 "large_pool_count": 1024, 00:20:32.396 "small_bufsize": 8192, 00:20:32.396 "large_bufsize": 135168 00:20:32.396 } 00:20:32.396 } 00:20:32.396 ] 00:20:32.396 }, 00:20:32.396 { 00:20:32.396 "subsystem": "sock", 00:20:32.396 "config": [ 00:20:32.396 { 00:20:32.396 "method": "sock_impl_set_options", 00:20:32.396 "params": { 00:20:32.396 "impl_name": "posix", 00:20:32.396 "recv_buf_size": 2097152, 00:20:32.396 "send_buf_size": 
2097152, 00:20:32.396 "enable_recv_pipe": true, 00:20:32.396 "enable_quickack": false, 00:20:32.396 "enable_placement_id": 0, 00:20:32.396 "enable_zerocopy_send_server": true, 00:20:32.396 "enable_zerocopy_send_client": false, 00:20:32.396 "zerocopy_threshold": 0, 00:20:32.396 "tls_version": 0, 00:20:32.396 "enable_ktls": false 00:20:32.396 } 00:20:32.396 }, 00:20:32.396 { 00:20:32.396 "method": "sock_impl_set_options", 00:20:32.396 "params": { 00:20:32.397 "impl_name": "ssl", 00:20:32.397 "recv_buf_size": 4096, 00:20:32.397 "send_buf_size": 4096, 00:20:32.397 "enable_recv_pipe": true, 00:20:32.397 "enable_quickack": false, 00:20:32.397 "enable_placement_id": 0, 00:20:32.397 "enable_zerocopy_send_server": true, 00:20:32.397 "enable_zerocopy_send_client": false, 00:20:32.397 "zerocopy_threshold": 0, 00:20:32.397 "tls_version": 0, 00:20:32.397 "enable_ktls": false 00:20:32.397 } 00:20:32.397 } 00:20:32.397 ] 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "subsystem": "vmd", 00:20:32.397 "config": [] 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "subsystem": "accel", 00:20:32.397 "config": [ 00:20:32.397 { 00:20:32.397 "method": "accel_set_options", 00:20:32.397 "params": { 00:20:32.397 "small_cache_size": 128, 00:20:32.397 "large_cache_size": 16, 00:20:32.397 "task_count": 2048, 00:20:32.397 "sequence_count": 2048, 00:20:32.397 "buf_count": 2048 00:20:32.397 } 00:20:32.397 } 00:20:32.397 ] 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "subsystem": "bdev", 00:20:32.397 "config": [ 00:20:32.397 { 00:20:32.397 "method": "bdev_set_options", 00:20:32.397 "params": { 00:20:32.397 "bdev_io_pool_size": 65535, 00:20:32.397 "bdev_io_cache_size": 256, 00:20:32.397 "bdev_auto_examine": true, 00:20:32.397 "iobuf_small_cache_size": 128, 00:20:32.397 "iobuf_large_cache_size": 16 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "bdev_raid_set_options", 00:20:32.397 "params": { 00:20:32.397 "process_window_size_kb": 1024 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "bdev_iscsi_set_options", 00:20:32.397 "params": { 00:20:32.397 "timeout_sec": 30 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "bdev_nvme_set_options", 00:20:32.397 "params": { 00:20:32.397 "action_on_timeout": "none", 00:20:32.397 "timeout_us": 0, 00:20:32.397 "timeout_admin_us": 0, 00:20:32.397 "keep_alive_timeout_ms": 10000, 00:20:32.397 "transport_retry_count": 4, 00:20:32.397 "arbitration_burst": 0, 00:20:32.397 "low_priority_weight": 0, 00:20:32.397 "medium_priority_weight": 0, 00:20:32.397 "high_priority_weight": 0, 00:20:32.397 "nvme_adminq_poll_period_us": 10000, 00:20:32.397 "nvme_ioq_poll_period_us": 0, 00:20:32.397 "io_queue_requests": 0, 00:20:32.397 "delay_cmd_submit": true, 00:20:32.397 "bdev_retry_count": 3, 00:20:32.397 "transport_ack_timeout": 0, 00:20:32.397 "ctrlr_loss_timeout_sec": 0, 00:20:32.397 "reconnect_delay_sec": 0, 00:20:32.397 "fast_io_fail_timeout_sec": 0, 00:20:32.397 "generate_uuids": false, 00:20:32.397 "transport_tos": 0, 00:20:32.397 "io_path_stat": false, 00:20:32.397 "allow_accel_sequence": false 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "bdev_nvme_set_hotplug", 00:20:32.397 "params": { 00:20:32.397 "period_us": 100000, 00:20:32.397 "enable": false 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "bdev_malloc_create", 00:20:32.397 "params": { 00:20:32.397 "name": "malloc0", 00:20:32.397 "num_blocks": 8192, 00:20:32.397 "block_size": 4096, 00:20:32.397 "physical_block_size": 4096, 00:20:32.397 "uuid": 
"b0b43be4-2a00-4319-9f57-8cd8e37ce267", 00:20:32.397 "optimal_io_boundary": 0 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "bdev_wait_for_examine" 00:20:32.397 } 00:20:32.397 ] 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "subsystem": "nbd", 00:20:32.397 "config": [] 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "subsystem": "scheduler", 00:20:32.397 "config": [ 00:20:32.397 { 00:20:32.397 "method": "framework_set_scheduler", 00:20:32.397 "params": { 00:20:32.397 "name": "static" 00:20:32.397 } 00:20:32.397 } 00:20:32.397 ] 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "subsystem": "nvmf", 00:20:32.397 "config": [ 00:20:32.397 { 00:20:32.397 "method": "nvmf_set_config", 00:20:32.397 "params": { 00:20:32.397 "discovery_filter": "match_any", 00:20:32.397 "admin_cmd_passthru": { 00:20:32.397 "identify_ctrlr": false 00:20:32.397 } 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_set_max_subsystems", 00:20:32.397 "params": { 00:20:32.397 "max_subsystems": 1024 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_set_crdt", 00:20:32.397 "params": { 00:20:32.397 "crdt1": 0, 00:20:32.397 "crdt2": 0, 00:20:32.397 "crdt3": 0 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_create_transport", 00:20:32.397 "params": { 00:20:32.397 "trtype": "TCP", 00:20:32.397 "max_queue_depth": 128, 00:20:32.397 "max_io_qpairs_per_ctrlr": 127, 00:20:32.397 "in_capsule_data_size": 4096, 00:20:32.397 "max_io_size": 131072, 00:20:32.397 "io_unit_size": 131072, 00:20:32.397 "max_aq_depth": 128, 00:20:32.397 "num_shared_buffers": 511, 00:20:32.397 "buf_cache_size": 4294967295, 00:20:32.397 "dif_insert_or_strip": false, 00:20:32.397 "zcopy": false, 00:20:32.397 "c2h_success": false, 00:20:32.397 "sock_priority": 0, 00:20:32.397 "abort_timeout_sec": 1 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_create_subsystem", 00:20:32.397 "params": { 00:20:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.397 "allow_any_host": false, 00:20:32.397 "serial_number": "SPDK00000000000001", 00:20:32.397 "model_number": "SPDK bdev Controller", 00:20:32.397 "max_namespaces": 10, 00:20:32.397 "min_cntlid": 1, 00:20:32.397 "max_cntlid": 65519, 00:20:32.397 "ana_reporting": false 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_subsystem_add_host", 00:20:32.397 "params": { 00:20:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.397 "host": "nqn.2016-06.io.spdk:host1", 00:20:32.397 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_subsystem_add_ns", 00:20:32.397 "params": { 00:20:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.397 "namespace": { 00:20:32.397 "nsid": 1, 00:20:32.397 "bdev_name": "malloc0", 00:20:32.397 "nguid": "B0B43BE42A0043199F578CD8E37CE267", 00:20:32.397 "uuid": "b0b43be4-2a00-4319-9f57-8cd8e37ce267" 00:20:32.397 } 00:20:32.397 } 00:20:32.397 }, 00:20:32.397 { 00:20:32.397 "method": "nvmf_subsystem_add_listener", 00:20:32.397 "params": { 00:20:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.397 "listen_address": { 00:20:32.397 "trtype": "TCP", 00:20:32.397 "adrfam": "IPv4", 00:20:32.397 "traddr": "10.0.0.2", 00:20:32.397 "trsvcid": "4420" 00:20:32.397 }, 00:20:32.397 "secure_channel": true 00:20:32.397 } 00:20:32.397 } 00:20:32.397 ] 00:20:32.397 } 00:20:32.397 ] 00:20:32.397 }' 00:20:32.397 19:29:30 -- common/autotest_common.sh@722 -- # xtrace_disable 
00:20:32.397 19:29:30 -- common/autotest_common.sh@10 -- # set +x 00:20:32.397 19:29:30 -- nvmf/common.sh@469 -- # nvmfpid=1231295 00:20:32.397 19:29:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:32.397 19:29:30 -- nvmf/common.sh@470 -- # waitforlisten 1231295 00:20:32.397 19:29:30 -- common/autotest_common.sh@829 -- # '[' -z 1231295 ']' 00:20:32.397 19:29:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.397 19:29:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.397 19:29:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.398 19:29:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.398 19:29:30 -- common/autotest_common.sh@10 -- # set +x 00:20:32.398 [2024-11-17 19:29:30.502016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:32.398 [2024-11-17 19:29:30.502111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.398 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.398 [2024-11-17 19:29:30.570059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.398 [2024-11-17 19:29:30.657638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:32.398 [2024-11-17 19:29:30.657829] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.398 [2024-11-17 19:29:30.657850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.398 [2024-11-17 19:29:30.657875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:32.398 [2024-11-17 19:29:30.657907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.656 [2024-11-17 19:29:30.886874] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.656 [2024-11-17 19:29:30.918896] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.656 [2024-11-17 19:29:30.919173] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.222 19:29:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.222 19:29:31 -- common/autotest_common.sh@862 -- # return 0 00:20:33.222 19:29:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:33.222 19:29:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.222 19:29:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.222 19:29:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.222 19:29:31 -- target/tls.sh@216 -- # bdevperf_pid=1231450 00:20:33.222 19:29:31 -- target/tls.sh@217 -- # waitforlisten 1231450 /var/tmp/bdevperf.sock 00:20:33.222 19:29:31 -- common/autotest_common.sh@829 -- # '[' -z 1231450 ']' 00:20:33.222 19:29:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.222 19:29:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.222 19:29:31 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:33.222 19:29:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:33.222 19:29:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.222 19:29:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.222 19:29:31 -- target/tls.sh@213 -- # echo '{ 00:20:33.222 "subsystems": [ 00:20:33.222 { 00:20:33.222 "subsystem": "iobuf", 00:20:33.222 "config": [ 00:20:33.222 { 00:20:33.222 "method": "iobuf_set_options", 00:20:33.222 "params": { 00:20:33.222 "small_pool_count": 8192, 00:20:33.222 "large_pool_count": 1024, 00:20:33.222 "small_bufsize": 8192, 00:20:33.222 "large_bufsize": 135168 00:20:33.222 } 00:20:33.222 } 00:20:33.222 ] 00:20:33.222 }, 00:20:33.222 { 00:20:33.222 "subsystem": "sock", 00:20:33.222 "config": [ 00:20:33.222 { 00:20:33.222 "method": "sock_impl_set_options", 00:20:33.222 "params": { 00:20:33.222 "impl_name": "posix", 00:20:33.222 "recv_buf_size": 2097152, 00:20:33.222 "send_buf_size": 2097152, 00:20:33.222 "enable_recv_pipe": true, 00:20:33.222 "enable_quickack": false, 00:20:33.222 "enable_placement_id": 0, 00:20:33.222 "enable_zerocopy_send_server": true, 00:20:33.222 "enable_zerocopy_send_client": false, 00:20:33.222 "zerocopy_threshold": 0, 00:20:33.222 "tls_version": 0, 00:20:33.222 "enable_ktls": false 00:20:33.222 } 00:20:33.222 }, 00:20:33.222 { 00:20:33.222 "method": "sock_impl_set_options", 00:20:33.222 "params": { 00:20:33.222 "impl_name": "ssl", 00:20:33.222 "recv_buf_size": 4096, 00:20:33.222 "send_buf_size": 4096, 00:20:33.222 "enable_recv_pipe": true, 00:20:33.222 "enable_quickack": false, 00:20:33.222 "enable_placement_id": 0, 00:20:33.222 "enable_zerocopy_send_server": true, 00:20:33.222 "enable_zerocopy_send_client": false, 00:20:33.222 "zerocopy_threshold": 0, 00:20:33.222 "tls_version": 0, 00:20:33.222 "enable_ktls": false 00:20:33.222 } 00:20:33.222 } 00:20:33.222 ] 00:20:33.222 }, 00:20:33.222 { 00:20:33.222 "subsystem": "vmd", 00:20:33.222 "config": [] 00:20:33.222 }, 00:20:33.222 { 00:20:33.222 "subsystem": "accel", 00:20:33.222 "config": [ 00:20:33.222 { 00:20:33.222 "method": "accel_set_options", 00:20:33.222 "params": { 00:20:33.222 "small_cache_size": 128, 00:20:33.222 "large_cache_size": 16, 00:20:33.223 "task_count": 2048, 00:20:33.223 "sequence_count": 2048, 00:20:33.223 "buf_count": 2048 00:20:33.223 } 00:20:33.223 } 00:20:33.223 ] 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "subsystem": "bdev", 00:20:33.223 "config": [ 00:20:33.223 { 00:20:33.223 "method": "bdev_set_options", 00:20:33.223 "params": { 00:20:33.223 "bdev_io_pool_size": 65535, 00:20:33.223 "bdev_io_cache_size": 256, 00:20:33.223 "bdev_auto_examine": true, 00:20:33.223 "iobuf_small_cache_size": 128, 00:20:33.223 "iobuf_large_cache_size": 16 00:20:33.223 } 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "method": "bdev_raid_set_options", 00:20:33.223 "params": { 00:20:33.223 "process_window_size_kb": 1024 00:20:33.223 } 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "method": "bdev_iscsi_set_options", 00:20:33.223 "params": { 00:20:33.223 "timeout_sec": 30 00:20:33.223 } 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "method": "bdev_nvme_set_options", 00:20:33.223 "params": { 00:20:33.223 "action_on_timeout": "none", 00:20:33.223 "timeout_us": 0, 00:20:33.223 "timeout_admin_us": 0, 00:20:33.223 "keep_alive_timeout_ms": 10000, 00:20:33.223 "transport_retry_count": 4, 00:20:33.223 "arbitration_burst": 0, 00:20:33.223 "low_priority_weight": 0, 00:20:33.223 "medium_priority_weight": 0, 00:20:33.223 "high_priority_weight": 0, 00:20:33.223 "nvme_adminq_poll_period_us": 10000, 00:20:33.223 "nvme_ioq_poll_period_us": 0, 00:20:33.223 
"io_queue_requests": 512, 00:20:33.223 "delay_cmd_submit": true, 00:20:33.223 "bdev_retry_count": 3, 00:20:33.223 "transport_ack_timeout": 0, 00:20:33.223 "ctrlr_loss_timeout_sec": 0, 00:20:33.223 "reconnect_delay_sec": 0, 00:20:33.223 "fast_io_fail_timeout_sec": 0, 00:20:33.223 "generate_uuids": false, 00:20:33.223 "transport_tos": 0, 00:20:33.223 "io_path_stat": false, 00:20:33.223 "allow_accel_sequence": false 00:20:33.223 } 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "method": "bdev_nvme_attach_controller", 00:20:33.223 "params": { 00:20:33.223 "name": "TLSTEST", 00:20:33.223 "trtype": "TCP", 00:20:33.223 "adrfam": "IPv4", 00:20:33.223 "traddr": "10.0.0.2", 00:20:33.223 "trsvcid": "4420", 00:20:33.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.223 "prchk_reftag": false, 00:20:33.223 "prchk_guard": false, 00:20:33.223 "ctrlr_loss_timeout_sec": 0, 00:20:33.223 "reconnect_delay_sec": 0, 00:20:33.223 "fast_io_fail_timeout_sec": 0, 00:20:33.223 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:33.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.223 "hdgst": false, 00:20:33.223 "ddgst": false 00:20:33.223 } 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "method": "bdev_nvme_set_hotplug", 00:20:33.223 "params": { 00:20:33.223 "period_us": 100000, 00:20:33.223 "enable": false 00:20:33.223 } 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "method": "bdev_wait_for_examine" 00:20:33.223 } 00:20:33.223 ] 00:20:33.223 }, 00:20:33.223 { 00:20:33.223 "subsystem": "nbd", 00:20:33.223 "config": [] 00:20:33.223 } 00:20:33.223 ] 00:20:33.223 }' 00:20:33.482 [2024-11-17 19:29:31.518585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:33.482 [2024-11-17 19:29:31.518685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231450 ] 00:20:33.482 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.482 [2024-11-17 19:29:31.576739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.482 [2024-11-17 19:29:31.661729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.740 [2024-11-17 19:29:31.818970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.306 19:29:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.306 19:29:32 -- common/autotest_common.sh@862 -- # return 0 00:20:34.306 19:29:32 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:34.565 Running I/O for 10 seconds... 
00:20:44.534 00:20:44.534 Latency(us) 00:20:44.534 [2024-11-17T18:29:42.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.534 [2024-11-17T18:29:42.801Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:44.534 Verification LBA range: start 0x0 length 0x2000 00:20:44.534 TLSTESTn1 : 10.02 4610.49 18.01 0.00 0.00 27724.27 5412.79 46409.20 00:20:44.535 [2024-11-17T18:29:42.802Z] =================================================================================================================== 00:20:44.535 [2024-11-17T18:29:42.802Z] Total : 4610.49 18.01 0.00 0.00 27724.27 5412.79 46409.20 00:20:44.535 0 00:20:44.535 19:29:42 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.535 19:29:42 -- target/tls.sh@223 -- # killprocess 1231450 00:20:44.535 19:29:42 -- common/autotest_common.sh@936 -- # '[' -z 1231450 ']' 00:20:44.535 19:29:42 -- common/autotest_common.sh@940 -- # kill -0 1231450 00:20:44.535 19:29:42 -- common/autotest_common.sh@941 -- # uname 00:20:44.535 19:29:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.535 19:29:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1231450 00:20:44.535 19:29:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:44.535 19:29:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:44.535 19:29:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1231450' 00:20:44.535 killing process with pid 1231450 00:20:44.535 19:29:42 -- common/autotest_common.sh@955 -- # kill 1231450 00:20:44.535 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.535 00:20:44.535 Latency(us) 00:20:44.535 [2024-11-17T18:29:42.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.535 [2024-11-17T18:29:42.802Z] =================================================================================================================== 00:20:44.535 [2024-11-17T18:29:42.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.535 19:29:42 -- common/autotest_common.sh@960 -- # wait 1231450 00:20:44.793 19:29:42 -- target/tls.sh@224 -- # killprocess 1231295 00:20:44.793 19:29:42 -- common/autotest_common.sh@936 -- # '[' -z 1231295 ']' 00:20:44.793 19:29:42 -- common/autotest_common.sh@940 -- # kill -0 1231295 00:20:44.793 19:29:42 -- common/autotest_common.sh@941 -- # uname 00:20:44.793 19:29:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.793 19:29:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1231295 00:20:44.793 19:29:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:44.793 19:29:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:44.793 19:29:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1231295' 00:20:44.793 killing process with pid 1231295 00:20:44.793 19:29:43 -- common/autotest_common.sh@955 -- # kill 1231295 00:20:44.793 19:29:43 -- common/autotest_common.sh@960 -- # wait 1231295 00:20:45.052 19:29:43 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:45.052 19:29:43 -- target/tls.sh@227 -- # cleanup 00:20:45.052 19:29:43 -- target/tls.sh@15 -- # process_shm --id 0 00:20:45.052 19:29:43 -- common/autotest_common.sh@806 -- # type=--id 00:20:45.052 19:29:43 -- common/autotest_common.sh@807 -- # id=0 00:20:45.052 19:29:43 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:45.052 19:29:43 -- common/autotest_common.sh@812 
-- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:45.052 19:29:43 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:45.052 19:29:43 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:45.052 19:29:43 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:45.052 19:29:43 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:45.052 nvmf_trace.0 00:20:45.052 19:29:43 -- common/autotest_common.sh@821 -- # return 0 00:20:45.052 19:29:43 -- target/tls.sh@16 -- # killprocess 1231450 00:20:45.052 19:29:43 -- common/autotest_common.sh@936 -- # '[' -z 1231450 ']' 00:20:45.052 19:29:43 -- common/autotest_common.sh@940 -- # kill -0 1231450 00:20:45.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1231450) - No such process 00:20:45.052 19:29:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1231450 is not found' 00:20:45.052 Process with pid 1231450 is not found 00:20:45.052 19:29:43 -- target/tls.sh@17 -- # nvmftestfini 00:20:45.052 19:29:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:45.052 19:29:43 -- nvmf/common.sh@116 -- # sync 00:20:45.052 19:29:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:45.052 19:29:43 -- nvmf/common.sh@119 -- # set +e 00:20:45.052 19:29:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:45.052 19:29:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:45.052 rmmod nvme_tcp 00:20:45.311 rmmod nvme_fabrics 00:20:45.311 rmmod nvme_keyring 00:20:45.311 19:29:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:45.311 19:29:43 -- nvmf/common.sh@123 -- # set -e 00:20:45.311 19:29:43 -- nvmf/common.sh@124 -- # return 0 00:20:45.311 19:29:43 -- nvmf/common.sh@477 -- # '[' -n 1231295 ']' 00:20:45.311 19:29:43 -- nvmf/common.sh@478 -- # killprocess 1231295 00:20:45.311 19:29:43 -- common/autotest_common.sh@936 -- # '[' -z 1231295 ']' 00:20:45.311 19:29:43 -- common/autotest_common.sh@940 -- # kill -0 1231295 00:20:45.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1231295) - No such process 00:20:45.311 19:29:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1231295 is not found' 00:20:45.311 Process with pid 1231295 is not found 00:20:45.311 19:29:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:45.311 19:29:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:45.311 19:29:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:45.311 19:29:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.311 19:29:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:45.311 19:29:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.311 19:29:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.311 19:29:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.213 19:29:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:47.213 19:29:45 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:47.213 00:20:47.213 real 1m14.324s 00:20:47.213 user 2m1.805s 00:20:47.213 sys 0m22.041s 00:20:47.213 19:29:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:47.213 
19:29:45 -- common/autotest_common.sh@10 -- # set +x 00:20:47.213 ************************************ 00:20:47.213 END TEST nvmf_tls 00:20:47.213 ************************************ 00:20:47.213 19:29:45 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:47.213 19:29:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:47.213 19:29:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:47.213 19:29:45 -- common/autotest_common.sh@10 -- # set +x 00:20:47.213 ************************************ 00:20:47.213 START TEST nvmf_fips 00:20:47.213 ************************************ 00:20:47.213 19:29:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:47.473 * Looking for test storage... 00:20:47.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:47.473 19:29:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:47.473 19:29:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:47.473 19:29:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:47.473 19:29:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:47.473 19:29:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:47.473 19:29:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:47.473 19:29:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:47.473 19:29:45 -- scripts/common.sh@335 -- # IFS=.-: 00:20:47.473 19:29:45 -- scripts/common.sh@335 -- # read -ra ver1 00:20:47.473 19:29:45 -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.473 19:29:45 -- scripts/common.sh@336 -- # read -ra ver2 00:20:47.473 19:29:45 -- scripts/common.sh@337 -- # local 'op=<' 00:20:47.473 19:29:45 -- scripts/common.sh@339 -- # ver1_l=2 00:20:47.473 19:29:45 -- scripts/common.sh@340 -- # ver2_l=1 00:20:47.473 19:29:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:47.473 19:29:45 -- scripts/common.sh@343 -- # case "$op" in 00:20:47.473 19:29:45 -- scripts/common.sh@344 -- # : 1 00:20:47.473 19:29:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:47.473 19:29:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.473 19:29:45 -- scripts/common.sh@364 -- # decimal 1 00:20:47.473 19:29:45 -- scripts/common.sh@352 -- # local d=1 00:20:47.473 19:29:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.473 19:29:45 -- scripts/common.sh@354 -- # echo 1 00:20:47.473 19:29:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:47.473 19:29:45 -- scripts/common.sh@365 -- # decimal 2 00:20:47.473 19:29:45 -- scripts/common.sh@352 -- # local d=2 00:20:47.473 19:29:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.473 19:29:45 -- scripts/common.sh@354 -- # echo 2 00:20:47.473 19:29:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:47.473 19:29:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:47.473 19:29:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:47.473 19:29:45 -- scripts/common.sh@367 -- # return 0 00:20:47.473 19:29:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.473 19:29:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.473 --rc genhtml_branch_coverage=1 00:20:47.473 --rc genhtml_function_coverage=1 00:20:47.473 --rc genhtml_legend=1 00:20:47.473 --rc geninfo_all_blocks=1 00:20:47.473 --rc geninfo_unexecuted_blocks=1 00:20:47.473 00:20:47.473 ' 00:20:47.473 19:29:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.473 --rc genhtml_branch_coverage=1 00:20:47.473 --rc genhtml_function_coverage=1 00:20:47.473 --rc genhtml_legend=1 00:20:47.473 --rc geninfo_all_blocks=1 00:20:47.473 --rc geninfo_unexecuted_blocks=1 00:20:47.473 00:20:47.473 ' 00:20:47.473 19:29:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.473 --rc genhtml_branch_coverage=1 00:20:47.473 --rc genhtml_function_coverage=1 00:20:47.473 --rc genhtml_legend=1 00:20:47.473 --rc geninfo_all_blocks=1 00:20:47.473 --rc geninfo_unexecuted_blocks=1 00:20:47.473 00:20:47.473 ' 00:20:47.473 19:29:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.473 --rc genhtml_branch_coverage=1 00:20:47.473 --rc genhtml_function_coverage=1 00:20:47.473 --rc genhtml_legend=1 00:20:47.473 --rc geninfo_all_blocks=1 00:20:47.473 --rc geninfo_unexecuted_blocks=1 00:20:47.473 00:20:47.473 ' 00:20:47.473 19:29:45 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.473 19:29:45 -- nvmf/common.sh@7 -- # uname -s 00:20:47.473 19:29:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.473 19:29:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.473 19:29:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.473 19:29:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.473 19:29:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.473 19:29:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.473 19:29:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.473 19:29:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.473 19:29:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.473 19:29:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.473 19:29:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.473 19:29:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.473 19:29:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.473 19:29:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.473 19:29:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.473 19:29:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.473 19:29:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.473 19:29:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.473 19:29:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.473 19:29:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.473 19:29:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.473 19:29:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.473 19:29:45 -- paths/export.sh@5 -- # export PATH 00:20:47.473 19:29:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.473 19:29:45 -- nvmf/common.sh@46 -- # : 0 00:20:47.473 19:29:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:47.473 19:29:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:47.473 19:29:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:47.473 19:29:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.473 19:29:45 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.473 19:29:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:47.473 19:29:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:47.473 19:29:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:47.473 19:29:45 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:47.473 19:29:45 -- fips/fips.sh@89 -- # check_openssl_version 00:20:47.473 19:29:45 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:47.473 19:29:45 -- fips/fips.sh@85 -- # openssl version 00:20:47.473 19:29:45 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:47.473 19:29:45 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:20:47.473 19:29:45 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:47.473 19:29:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:47.474 19:29:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:47.474 19:29:45 -- scripts/common.sh@335 -- # IFS=.-: 00:20:47.474 19:29:45 -- scripts/common.sh@335 -- # read -ra ver1 00:20:47.474 19:29:45 -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.474 19:29:45 -- scripts/common.sh@336 -- # read -ra ver2 00:20:47.474 19:29:45 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:47.474 19:29:45 -- scripts/common.sh@339 -- # ver1_l=3 00:20:47.474 19:29:45 -- scripts/common.sh@340 -- # ver2_l=3 00:20:47.474 19:29:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:47.474 19:29:45 -- scripts/common.sh@343 -- # case "$op" in 00:20:47.474 19:29:45 -- scripts/common.sh@347 -- # : 1 00:20:47.474 19:29:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:47.474 19:29:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:47.474 19:29:45 -- scripts/common.sh@364 -- # decimal 3 00:20:47.474 19:29:45 -- scripts/common.sh@352 -- # local d=3 00:20:47.474 19:29:45 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:47.474 19:29:45 -- scripts/common.sh@354 -- # echo 3 00:20:47.474 19:29:45 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:47.474 19:29:45 -- scripts/common.sh@365 -- # decimal 3 00:20:47.474 19:29:45 -- scripts/common.sh@352 -- # local d=3 00:20:47.474 19:29:45 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:47.474 19:29:45 -- scripts/common.sh@354 -- # echo 3 00:20:47.474 19:29:45 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:47.474 19:29:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:47.474 19:29:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:47.474 19:29:45 -- scripts/common.sh@363 -- # (( v++ )) 00:20:47.474 19:29:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:47.474 19:29:45 -- scripts/common.sh@364 -- # decimal 1 00:20:47.474 19:29:45 -- scripts/common.sh@352 -- # local d=1 00:20:47.474 19:29:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.474 19:29:45 -- scripts/common.sh@354 -- # echo 1 00:20:47.474 19:29:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:47.474 19:29:45 -- scripts/common.sh@365 -- # decimal 0 00:20:47.474 19:29:45 -- scripts/common.sh@352 -- # local d=0 00:20:47.474 19:29:45 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:47.474 19:29:45 -- scripts/common.sh@354 -- # echo 0 00:20:47.474 19:29:45 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:47.474 19:29:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:47.474 19:29:45 -- scripts/common.sh@366 -- # return 0 00:20:47.474 19:29:45 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:47.474 19:29:45 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:47.474 19:29:45 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:47.474 19:29:45 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:47.474 19:29:45 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:47.474 19:29:45 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:47.474 19:29:45 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:47.474 19:29:45 -- fips/fips.sh@113 -- # build_openssl_config 00:20:47.474 19:29:45 -- fips/fips.sh@37 -- # cat 00:20:47.474 19:29:45 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:47.474 19:29:45 -- fips/fips.sh@58 -- # cat - 00:20:47.474 19:29:45 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:47.474 19:29:45 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:47.474 19:29:45 -- fips/fips.sh@116 -- # mapfile -t providers 00:20:47.474 19:29:45 -- fips/fips.sh@116 -- # openssl list -providers 00:20:47.474 19:29:45 -- fips/fips.sh@116 -- # grep name 00:20:47.474 19:29:45 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:47.474 19:29:45 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:47.474 19:29:45 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:47.474 19:29:45 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:47.474 19:29:45 -- common/autotest_common.sh@650 -- # local es=0 00:20:47.474 19:29:45 -- fips/fips.sh@127 -- # : 00:20:47.474 19:29:45 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:47.474 19:29:45 -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:47.474 19:29:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.474 19:29:45 -- common/autotest_common.sh@642 -- # type -t openssl 00:20:47.474 19:29:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.474 19:29:45 -- common/autotest_common.sh@644 -- # type -P openssl 00:20:47.474 19:29:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.474 19:29:45 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:47.474 19:29:45 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:47.474 19:29:45 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:47.474 Error setting digest 00:20:47.474 40826EBC077F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:47.474 40826EBC077F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:47.474 19:29:45 -- common/autotest_common.sh@653 -- # es=1 00:20:47.474 19:29:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.474 19:29:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.474 19:29:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.474 19:29:45 -- fips/fips.sh@130 -- # nvmftestinit 00:20:47.474 19:29:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:47.474 19:29:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.474 19:29:45 -- nvmf/common.sh@436 -- # prepare_net_devs 
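In short, the FIPS gate that just passed does four things: it requires OpenSSL >= 3.0 (3.1.1 here), confirms the fips.so provider module exists under the OpenSSL modules directory, loads the generated spdk_fips.conf and checks that both the base and fips providers are listed, and finally verifies that a non-approved digest is rejected, which is exactly the 'Error setting digest' output above. A condensed sketch of that sequence (the version check is simplified, and spdk_fips.conf is the config written on the fly by the test's build_openssl_config helper):

openssl version | awk '{print $2}'                 # expect >= 3.0.0
ls "$(openssl info -modulesdir)/fips.so"           # FIPS provider module is installed
export OPENSSL_CONF=spdk_fips.conf                 # config generated by build_openssl_config
openssl list -providers | grep name                # expect the base and fips providers
if openssl md5 /dev/null 2>/dev/null; then         # MD5 must be refused in FIPS mode
    echo "MD5 unexpectedly worked, FIPS mode is not active" >&2
    exit 1
fi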
00:20:47.474 19:29:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:47.474 19:29:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:47.474 19:29:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.474 19:29:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.474 19:29:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.474 19:29:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:47.474 19:29:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:47.474 19:29:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:47.474 19:29:45 -- common/autotest_common.sh@10 -- # set +x 00:20:50.064 19:29:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:50.064 19:29:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:50.064 19:29:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:50.064 19:29:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:50.064 19:29:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:50.064 19:29:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:50.064 19:29:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:50.064 19:29:47 -- nvmf/common.sh@294 -- # net_devs=() 00:20:50.064 19:29:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:50.064 19:29:47 -- nvmf/common.sh@295 -- # e810=() 00:20:50.064 19:29:47 -- nvmf/common.sh@295 -- # local -ga e810 00:20:50.064 19:29:47 -- nvmf/common.sh@296 -- # x722=() 00:20:50.064 19:29:47 -- nvmf/common.sh@296 -- # local -ga x722 00:20:50.064 19:29:47 -- nvmf/common.sh@297 -- # mlx=() 00:20:50.064 19:29:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:50.064 19:29:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.064 19:29:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:50.064 19:29:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:50.064 19:29:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:50.064 19:29:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:50.064 19:29:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:50.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:50.064 19:29:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:50.064 19:29:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:50.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:50.064 19:29:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:50.064 19:29:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:50.064 19:29:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.064 19:29:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:50.064 19:29:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.064 19:29:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:50.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:50.064 19:29:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.064 19:29:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:50.064 19:29:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.064 19:29:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:50.064 19:29:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.064 19:29:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:50.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:50.064 19:29:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.064 19:29:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:50.064 19:29:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:50.064 19:29:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:50.064 19:29:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.064 19:29:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.064 19:29:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.064 19:29:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:50.064 19:29:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.064 19:29:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.064 19:29:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:50.064 19:29:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.064 19:29:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.064 19:29:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:50.064 19:29:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:50.064 19:29:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.064 19:29:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.064 19:29:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.064 19:29:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:20:50.064 19:29:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:50.064 19:29:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.064 19:29:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.064 19:29:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.064 19:29:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:50.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:20:50.064 00:20:50.064 --- 10.0.0.2 ping statistics --- 00:20:50.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.064 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:20:50.064 19:29:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:50.064 00:20:50.064 --- 10.0.0.1 ping statistics --- 00:20:50.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.064 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:50.064 19:29:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.064 19:29:47 -- nvmf/common.sh@410 -- # return 0 00:20:50.064 19:29:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:50.064 19:29:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.064 19:29:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:50.064 19:29:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.064 19:29:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:50.064 19:29:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:50.064 19:29:47 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:50.064 19:29:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:50.064 19:29:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.064 19:29:47 -- common/autotest_common.sh@10 -- # set +x 00:20:50.064 19:29:47 -- nvmf/common.sh@469 -- # nvmfpid=1234883 00:20:50.064 19:29:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:50.064 19:29:47 -- nvmf/common.sh@470 -- # waitforlisten 1234883 00:20:50.064 19:29:47 -- common/autotest_common.sh@829 -- # '[' -z 1234883 ']' 00:20:50.064 19:29:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.064 19:29:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.064 19:29:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.065 19:29:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.065 19:29:47 -- common/autotest_common.sh@10 -- # set +x 00:20:50.065 [2024-11-17 19:29:48.054501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
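For orientation, the network plumbing that nvmftestinit performed just above boils down to this: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and connectivity is verified with one ping in each direction. Condensed from the commands logged above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target back to initiator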
00:20:50.065 [2024-11-17 19:29:48.054601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.065 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.065 [2024-11-17 19:29:48.123228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.065 [2024-11-17 19:29:48.214476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:50.065 [2024-11-17 19:29:48.214684] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.065 [2024-11-17 19:29:48.214706] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.065 [2024-11-17 19:29:48.214722] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.065 [2024-11-17 19:29:48.214756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.006 19:29:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.006 19:29:49 -- common/autotest_common.sh@862 -- # return 0 00:20:51.006 19:29:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:51.006 19:29:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.006 19:29:49 -- common/autotest_common.sh@10 -- # set +x 00:20:51.006 19:29:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.006 19:29:49 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:51.006 19:29:49 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.006 19:29:49 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.006 19:29:49 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:51.006 19:29:49 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.006 19:29:49 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.006 19:29:49 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.006 19:29:49 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.264 [2024-11-17 19:29:49.292923] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.264 [2024-11-17 19:29:49.308900] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.264 [2024-11-17 19:29:49.309151] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.264 malloc0 00:20:51.264 19:29:49 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.264 19:29:49 -- fips/fips.sh@147 -- # bdevperf_pid=1235074 00:20:51.264 19:29:49 -- fips/fips.sh@148 -- # waitforlisten 1235074 /var/tmp/bdevperf.sock 00:20:51.264 19:29:49 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.264 19:29:49 -- common/autotest_common.sh@829 -- # '[' -z 1235074 ']' 00:20:51.264 19:29:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.264 19:29:49 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.264 19:29:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.264 19:29:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.264 19:29:49 -- common/autotest_common.sh@10 -- # set +x 00:20:51.264 [2024-11-17 19:29:49.429964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:51.264 [2024-11-17 19:29:49.430071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235074 ] 00:20:51.264 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.264 [2024-11-17 19:29:49.488351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.524 [2024-11-17 19:29:49.574037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.461 19:29:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.461 19:29:50 -- common/autotest_common.sh@862 -- # return 0 00:20:52.461 19:29:50 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:52.461 [2024-11-17 19:29:50.596712] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.461 TLSTESTn1 00:20:52.461 19:29:50 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.720 Running I/O for 10 seconds... 
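The client side of this FIPS run follows the same pattern as the earlier tls.sh run, only driven over RPC instead of a pre-generated JSON config: bdevperf is started idle with -z, waitforlisten waits for its RPC socket, the TLS controller is attached with the PSK written a moment ago, and the verify workload is then kicked off. A sketch of that sequence, abbreviated to paths relative to the spdk checkout:

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk ./test/nvmf/fips/key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests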
00:21:02.704 00:21:02.704 Latency(us) 00:21:02.704 [2024-11-17T18:30:00.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.704 [2024-11-17T18:30:00.971Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.704 Verification LBA range: start 0x0 length 0x2000 00:21:02.704 TLSTESTn1 : 10.01 4591.05 17.93 0.00 0.00 27846.14 4271.98 44661.57 00:21:02.704 [2024-11-17T18:30:00.971Z] =================================================================================================================== 00:21:02.704 [2024-11-17T18:30:00.971Z] Total : 4591.05 17.93 0.00 0.00 27846.14 4271.98 44661.57 00:21:02.704 0 00:21:02.704 19:30:00 -- fips/fips.sh@1 -- # cleanup 00:21:02.704 19:30:00 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:02.704 19:30:00 -- common/autotest_common.sh@806 -- # type=--id 00:21:02.704 19:30:00 -- common/autotest_common.sh@807 -- # id=0 00:21:02.704 19:30:00 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:02.704 19:30:00 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:02.704 19:30:00 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:02.704 19:30:00 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:02.704 19:30:00 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:02.704 19:30:00 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:02.704 nvmf_trace.0 00:21:02.704 19:30:00 -- common/autotest_common.sh@821 -- # return 0 00:21:02.704 19:30:00 -- fips/fips.sh@16 -- # killprocess 1235074 00:21:02.704 19:30:00 -- common/autotest_common.sh@936 -- # '[' -z 1235074 ']' 00:21:02.704 19:30:00 -- common/autotest_common.sh@940 -- # kill -0 1235074 00:21:02.704 19:30:00 -- common/autotest_common.sh@941 -- # uname 00:21:02.704 19:30:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.704 19:30:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1235074 00:21:02.704 19:30:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:02.704 19:30:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:02.704 19:30:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1235074' 00:21:02.704 killing process with pid 1235074 00:21:02.704 19:30:00 -- common/autotest_common.sh@955 -- # kill 1235074 00:21:02.704 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.704 00:21:02.704 Latency(us) 00:21:02.704 [2024-11-17T18:30:00.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.704 [2024-11-17T18:30:00.971Z] =================================================================================================================== 00:21:02.704 [2024-11-17T18:30:00.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.704 19:30:00 -- common/autotest_common.sh@960 -- # wait 1235074 00:21:02.962 19:30:01 -- fips/fips.sh@17 -- # nvmftestfini 00:21:02.962 19:30:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:02.962 19:30:01 -- nvmf/common.sh@116 -- # sync 00:21:02.962 19:30:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:02.962 19:30:01 -- nvmf/common.sh@119 -- # set +e 00:21:02.962 19:30:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:02.962 19:30:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:02.962 rmmod nvme_tcp 00:21:02.962 rmmod nvme_fabrics 00:21:02.962 rmmod nvme_keyring 
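The process_shm step above is what turns the in-memory tracepoint ring into a build artifact: it looks for *.0 shared-memory files under /dev/shm and tars each one (nvmf_trace.0 here) into the job's output directory for offline analysis, as the startup notice suggested earlier. Roughly, with OUTPUT_DIR standing in for the autotest output path:

for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
    tar -C /dev/shm/ -czvf "$OUTPUT_DIR/${f}_shm.tar.gz" "$f"   # e.g. nvmf_trace.0_shm.tar.gz
done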
00:21:02.962 19:30:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:02.962 19:30:01 -- nvmf/common.sh@123 -- # set -e 00:21:02.962 19:30:01 -- nvmf/common.sh@124 -- # return 0 00:21:02.962 19:30:01 -- nvmf/common.sh@477 -- # '[' -n 1234883 ']' 00:21:02.962 19:30:01 -- nvmf/common.sh@478 -- # killprocess 1234883 00:21:02.962 19:30:01 -- common/autotest_common.sh@936 -- # '[' -z 1234883 ']' 00:21:02.963 19:30:01 -- common/autotest_common.sh@940 -- # kill -0 1234883 00:21:02.963 19:30:01 -- common/autotest_common.sh@941 -- # uname 00:21:02.963 19:30:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.963 19:30:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1234883 00:21:03.220 19:30:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:03.220 19:30:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:03.220 19:30:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1234883' 00:21:03.220 killing process with pid 1234883 00:21:03.220 19:30:01 -- common/autotest_common.sh@955 -- # kill 1234883 00:21:03.220 19:30:01 -- common/autotest_common.sh@960 -- # wait 1234883 00:21:03.479 19:30:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:03.479 19:30:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:03.479 19:30:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:03.479 19:30:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.479 19:30:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:03.479 19:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.479 19:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.479 19:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.389 19:30:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:05.389 19:30:03 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.389 00:21:05.389 real 0m18.098s 00:21:05.389 user 0m23.462s 00:21:05.389 sys 0m5.868s 00:21:05.389 19:30:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:05.389 19:30:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.389 ************************************ 00:21:05.389 END TEST nvmf_fips 00:21:05.389 ************************************ 00:21:05.389 19:30:03 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:05.389 19:30:03 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.389 19:30:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.389 19:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.389 19:30:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.389 ************************************ 00:21:05.389 START TEST nvmf_fuzz 00:21:05.389 ************************************ 00:21:05.389 19:30:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.389 * Looking for test storage... 
00:21:05.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.389 19:30:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:05.389 19:30:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:05.389 19:30:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:05.650 19:30:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:05.650 19:30:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:05.650 19:30:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:05.650 19:30:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:05.650 19:30:03 -- scripts/common.sh@335 -- # IFS=.-: 00:21:05.650 19:30:03 -- scripts/common.sh@335 -- # read -ra ver1 00:21:05.650 19:30:03 -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.650 19:30:03 -- scripts/common.sh@336 -- # read -ra ver2 00:21:05.650 19:30:03 -- scripts/common.sh@337 -- # local 'op=<' 00:21:05.650 19:30:03 -- scripts/common.sh@339 -- # ver1_l=2 00:21:05.650 19:30:03 -- scripts/common.sh@340 -- # ver2_l=1 00:21:05.650 19:30:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:05.650 19:30:03 -- scripts/common.sh@343 -- # case "$op" in 00:21:05.650 19:30:03 -- scripts/common.sh@344 -- # : 1 00:21:05.650 19:30:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:05.650 19:30:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:05.650 19:30:03 -- scripts/common.sh@364 -- # decimal 1 00:21:05.650 19:30:03 -- scripts/common.sh@352 -- # local d=1 00:21:05.650 19:30:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.650 19:30:03 -- scripts/common.sh@354 -- # echo 1 00:21:05.650 19:30:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:05.650 19:30:03 -- scripts/common.sh@365 -- # decimal 2 00:21:05.650 19:30:03 -- scripts/common.sh@352 -- # local d=2 00:21:05.650 19:30:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.650 19:30:03 -- scripts/common.sh@354 -- # echo 2 00:21:05.650 19:30:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:05.650 19:30:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:05.650 19:30:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:05.650 19:30:03 -- scripts/common.sh@367 -- # return 0 00:21:05.650 19:30:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.650 19:30:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.650 --rc genhtml_branch_coverage=1 00:21:05.650 --rc genhtml_function_coverage=1 00:21:05.650 --rc genhtml_legend=1 00:21:05.650 --rc geninfo_all_blocks=1 00:21:05.650 --rc geninfo_unexecuted_blocks=1 00:21:05.650 00:21:05.650 ' 00:21:05.650 19:30:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.650 --rc genhtml_branch_coverage=1 00:21:05.650 --rc genhtml_function_coverage=1 00:21:05.650 --rc genhtml_legend=1 00:21:05.650 --rc geninfo_all_blocks=1 00:21:05.650 --rc geninfo_unexecuted_blocks=1 00:21:05.650 00:21:05.650 ' 00:21:05.650 19:30:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.650 --rc genhtml_branch_coverage=1 00:21:05.650 --rc genhtml_function_coverage=1 00:21:05.650 --rc genhtml_legend=1 00:21:05.650 --rc geninfo_all_blocks=1 00:21:05.650 --rc geninfo_unexecuted_blocks=1 00:21:05.650 00:21:05.650 
' 00:21:05.650 19:30:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.650 --rc genhtml_branch_coverage=1 00:21:05.650 --rc genhtml_function_coverage=1 00:21:05.650 --rc genhtml_legend=1 00:21:05.650 --rc geninfo_all_blocks=1 00:21:05.651 --rc geninfo_unexecuted_blocks=1 00:21:05.651 00:21:05.651 ' 00:21:05.651 19:30:03 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.651 19:30:03 -- nvmf/common.sh@7 -- # uname -s 00:21:05.651 19:30:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.651 19:30:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.651 19:30:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.651 19:30:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.651 19:30:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.651 19:30:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.651 19:30:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.651 19:30:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.651 19:30:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.651 19:30:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.651 19:30:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.651 19:30:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.651 19:30:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.651 19:30:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.651 19:30:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.651 19:30:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.651 19:30:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.651 19:30:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.651 19:30:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.651 19:30:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.651 19:30:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.651 19:30:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.651 19:30:03 -- paths/export.sh@5 -- # export PATH 00:21:05.651 19:30:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.651 19:30:03 -- nvmf/common.sh@46 -- # : 0 00:21:05.651 19:30:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:05.651 19:30:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:05.651 19:30:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:05.651 19:30:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.651 19:30:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.651 19:30:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:05.651 19:30:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:05.651 19:30:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:05.651 19:30:03 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:05.651 19:30:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:05.651 19:30:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.651 19:30:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:05.651 19:30:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:05.651 19:30:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:05.651 19:30:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.651 19:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.651 19:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.651 19:30:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:05.651 19:30:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:05.651 19:30:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:05.651 19:30:03 -- common/autotest_common.sh@10 -- # set +x 00:21:08.185 19:30:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:08.185 19:30:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:08.185 19:30:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:08.185 19:30:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:08.185 19:30:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:08.185 19:30:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:08.185 19:30:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:08.185 19:30:05 -- nvmf/common.sh@294 -- # net_devs=() 00:21:08.185 19:30:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:08.185 19:30:05 -- nvmf/common.sh@295 -- # e810=() 00:21:08.185 19:30:05 -- nvmf/common.sh@295 -- # local -ga e810 00:21:08.185 19:30:05 -- nvmf/common.sh@296 -- # x722=() 
00:21:08.185 19:30:05 -- nvmf/common.sh@296 -- # local -ga x722 00:21:08.185 19:30:05 -- nvmf/common.sh@297 -- # mlx=() 00:21:08.185 19:30:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:08.185 19:30:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.185 19:30:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:08.185 19:30:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:08.185 19:30:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:08.185 19:30:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:08.185 19:30:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:08.185 19:30:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:08.185 19:30:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:08.185 19:30:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:08.185 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:08.185 19:30:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:08.185 19:30:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:08.186 19:30:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:08.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:08.186 19:30:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:08.186 19:30:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:08.186 19:30:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.186 19:30:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:08.186 19:30:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.186 19:30:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:08.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:08.186 19:30:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:08.186 19:30:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:08.186 19:30:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.186 19:30:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:08.186 19:30:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.186 19:30:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:08.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:08.186 19:30:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.186 19:30:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:08.186 19:30:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:08.186 19:30:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:08.186 19:30:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.186 19:30:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.186 19:30:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.186 19:30:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:08.186 19:30:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.186 19:30:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.186 19:30:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:08.186 19:30:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.186 19:30:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.186 19:30:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:08.186 19:30:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:08.186 19:30:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.186 19:30:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.186 19:30:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.186 19:30:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.186 19:30:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:08.186 19:30:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.186 19:30:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.186 19:30:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.186 19:30:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:08.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:21:08.186 00:21:08.186 --- 10.0.0.2 ping statistics --- 00:21:08.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.186 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:08.186 19:30:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:08.186 00:21:08.186 --- 10.0.0.1 ping statistics --- 00:21:08.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.186 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:08.186 19:30:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.186 19:30:05 -- nvmf/common.sh@410 -- # return 0 00:21:08.186 19:30:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:08.186 19:30:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.186 19:30:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:08.186 19:30:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.186 19:30:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:08.186 19:30:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:08.186 19:30:06 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1238521 00:21:08.186 19:30:06 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:08.186 19:30:06 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:08.186 19:30:06 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1238521 00:21:08.186 19:30:06 -- common/autotest_common.sh@829 -- # '[' -z 1238521 ']' 00:21:08.186 19:30:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.186 19:30:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.186 19:30:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
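Condensed, the nvmf_tcp_init steps traced above build a two-port back-to-back topology before the target starts: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24 (the target side), its peer (cvl_0_1) stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in iptables, and a ping in each direction confirms the link. The same commands, pulled out of the trace and run as root (the interface names are the harness's renamed ports, so substitute your own NICs):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

With the namespace in place, nvmf_tgt is launched under 'ip netns exec cvl_0_0_ns_spdk' so that its 4420 listener lives on the target-side port.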
00:21:08.186 19:30:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.186 19:30:06 -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 19:30:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.123 19:30:07 -- common/autotest_common.sh@862 -- # return 0 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.123 19:30:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.123 19:30:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 19:30:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:09.123 19:30:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.123 19:30:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 Malloc0 00:21:09.123 19:30:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.123 19:30:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.123 19:30:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 19:30:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.123 19:30:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.123 19:30:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 19:30:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.123 19:30:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.123 19:30:07 -- common/autotest_common.sh@10 -- # set +x 00:21:09.123 19:30:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:09.123 19:30:07 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:41.229 Fuzzing completed. Shutting down the fuzz application 00:21:41.229 00:21:41.229 Dumping successful admin opcodes: 00:21:41.229 8, 9, 10, 24, 00:21:41.229 Dumping successful io opcodes: 00:21:41.229 0, 9, 00:21:41.229 NS: 0x200003aeff00 I/O qp, Total commands completed: 454873, total successful commands: 2639, random_seed: 1693593216 00:21:41.229 NS: 0x200003aeff00 admin qp, Total commands completed: 56496, total successful commands: 448, random_seed: 524828864 00:21:41.229 19:30:37 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:41.229 Fuzzing completed. 
Shutting down the fuzz application 00:21:41.229 00:21:41.229 Dumping successful admin opcodes: 00:21:41.229 24, 00:21:41.229 Dumping successful io opcodes: 00:21:41.229 00:21:41.229 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2022444547 00:21:41.229 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2022571078 00:21:41.229 19:30:38 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.229 19:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.229 19:30:38 -- common/autotest_common.sh@10 -- # set +x 00:21:41.229 19:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.229 19:30:38 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:41.229 19:30:38 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:41.229 19:30:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:41.229 19:30:38 -- nvmf/common.sh@116 -- # sync 00:21:41.229 19:30:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:41.229 19:30:38 -- nvmf/common.sh@119 -- # set +e 00:21:41.229 19:30:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:41.229 19:30:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:41.229 rmmod nvme_tcp 00:21:41.229 rmmod nvme_fabrics 00:21:41.229 rmmod nvme_keyring 00:21:41.229 19:30:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:41.229 19:30:39 -- nvmf/common.sh@123 -- # set -e 00:21:41.229 19:30:39 -- nvmf/common.sh@124 -- # return 0 00:21:41.229 19:30:39 -- nvmf/common.sh@477 -- # '[' -n 1238521 ']' 00:21:41.229 19:30:39 -- nvmf/common.sh@478 -- # killprocess 1238521 00:21:41.229 19:30:39 -- common/autotest_common.sh@936 -- # '[' -z 1238521 ']' 00:21:41.229 19:30:39 -- common/autotest_common.sh@940 -- # kill -0 1238521 00:21:41.229 19:30:39 -- common/autotest_common.sh@941 -- # uname 00:21:41.229 19:30:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.229 19:30:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1238521 00:21:41.229 19:30:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:41.229 19:30:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:41.229 19:30:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1238521' 00:21:41.229 killing process with pid 1238521 00:21:41.229 19:30:39 -- common/autotest_common.sh@955 -- # kill 1238521 00:21:41.229 19:30:39 -- common/autotest_common.sh@960 -- # wait 1238521 00:21:41.229 19:30:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:41.229 19:30:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:41.229 19:30:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:41.229 19:30:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.229 19:30:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:41.229 19:30:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.229 19:30:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.229 19:30:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.138 19:30:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:43.138 19:30:41 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:43.138 00:21:43.138 real 0m37.827s 00:21:43.138 user 0m51.725s 00:21:43.138 sys 
0m15.425s 00:21:43.138 19:30:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:43.138 19:30:41 -- common/autotest_common.sh@10 -- # set +x 00:21:43.138 ************************************ 00:21:43.138 END TEST nvmf_fuzz 00:21:43.138 ************************************ 00:21:43.397 19:30:41 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:43.397 19:30:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:43.397 19:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.397 19:30:41 -- common/autotest_common.sh@10 -- # set +x 00:21:43.397 ************************************ 00:21:43.397 START TEST nvmf_multiconnection 00:21:43.397 ************************************ 00:21:43.397 19:30:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:43.397 * Looking for test storage... 00:21:43.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.397 19:30:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:43.397 19:30:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:43.397 19:30:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:43.397 19:30:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:43.397 19:30:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:43.397 19:30:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:43.397 19:30:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:43.397 19:30:41 -- scripts/common.sh@335 -- # IFS=.-: 00:21:43.397 19:30:41 -- scripts/common.sh@335 -- # read -ra ver1 00:21:43.397 19:30:41 -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.397 19:30:41 -- scripts/common.sh@336 -- # read -ra ver2 00:21:43.397 19:30:41 -- scripts/common.sh@337 -- # local 'op=<' 00:21:43.397 19:30:41 -- scripts/common.sh@339 -- # ver1_l=2 00:21:43.397 19:30:41 -- scripts/common.sh@340 -- # ver2_l=1 00:21:43.397 19:30:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:43.397 19:30:41 -- scripts/common.sh@343 -- # case "$op" in 00:21:43.397 19:30:41 -- scripts/common.sh@344 -- # : 1 00:21:43.397 19:30:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:43.397 19:30:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.397 19:30:41 -- scripts/common.sh@364 -- # decimal 1 00:21:43.397 19:30:41 -- scripts/common.sh@352 -- # local d=1 00:21:43.397 19:30:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.397 19:30:41 -- scripts/common.sh@354 -- # echo 1 00:21:43.397 19:30:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:43.397 19:30:41 -- scripts/common.sh@365 -- # decimal 2 00:21:43.397 19:30:41 -- scripts/common.sh@352 -- # local d=2 00:21:43.397 19:30:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.397 19:30:41 -- scripts/common.sh@354 -- # echo 2 00:21:43.397 19:30:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:43.397 19:30:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:43.397 19:30:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:43.397 19:30:41 -- scripts/common.sh@367 -- # return 0 00:21:43.397 19:30:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.397 19:30:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.397 --rc genhtml_branch_coverage=1 00:21:43.397 --rc genhtml_function_coverage=1 00:21:43.397 --rc genhtml_legend=1 00:21:43.397 --rc geninfo_all_blocks=1 00:21:43.397 --rc geninfo_unexecuted_blocks=1 00:21:43.397 00:21:43.397 ' 00:21:43.397 19:30:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.397 --rc genhtml_branch_coverage=1 00:21:43.397 --rc genhtml_function_coverage=1 00:21:43.397 --rc genhtml_legend=1 00:21:43.397 --rc geninfo_all_blocks=1 00:21:43.397 --rc geninfo_unexecuted_blocks=1 00:21:43.397 00:21:43.397 ' 00:21:43.397 19:30:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.397 --rc genhtml_branch_coverage=1 00:21:43.397 --rc genhtml_function_coverage=1 00:21:43.397 --rc genhtml_legend=1 00:21:43.397 --rc geninfo_all_blocks=1 00:21:43.397 --rc geninfo_unexecuted_blocks=1 00:21:43.397 00:21:43.397 ' 00:21:43.397 19:30:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.397 --rc genhtml_branch_coverage=1 00:21:43.397 --rc genhtml_function_coverage=1 00:21:43.397 --rc genhtml_legend=1 00:21:43.397 --rc geninfo_all_blocks=1 00:21:43.397 --rc geninfo_unexecuted_blocks=1 00:21:43.397 00:21:43.397 ' 00:21:43.397 19:30:41 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.397 19:30:41 -- nvmf/common.sh@7 -- # uname -s 00:21:43.397 19:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.397 19:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.397 19:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.397 19:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.397 19:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.397 19:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.397 19:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.397 19:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.397 19:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.397 19:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.397 19:30:41 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.397 19:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.397 19:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.397 19:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.397 19:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.397 19:30:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.397 19:30:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.397 19:30:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.397 19:30:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.397 19:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.397 19:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.397 19:30:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.397 19:30:41 -- paths/export.sh@5 -- # export PATH 00:21:43.397 19:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.397 19:30:41 -- nvmf/common.sh@46 -- # : 0 00:21:43.397 19:30:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:43.398 19:30:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:43.398 19:30:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:43.398 19:30:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.398 19:30:41 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.398 19:30:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:43.398 19:30:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:43.398 19:30:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:43.398 19:30:41 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.398 19:30:41 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.398 19:30:41 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:43.398 19:30:41 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:43.398 19:30:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:43.398 19:30:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.398 19:30:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:43.398 19:30:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:43.398 19:30:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:43.398 19:30:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.398 19:30:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.398 19:30:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.398 19:30:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:43.398 19:30:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:43.398 19:30:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:43.398 19:30:41 -- common/autotest_common.sh@10 -- # set +x 00:21:45.936 19:30:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.936 19:30:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:45.936 19:30:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:45.936 19:30:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:45.936 19:30:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:45.936 19:30:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:45.936 19:30:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:45.936 19:30:43 -- nvmf/common.sh@294 -- # net_devs=() 00:21:45.936 19:30:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:45.936 19:30:43 -- nvmf/common.sh@295 -- # e810=() 00:21:45.936 19:30:43 -- nvmf/common.sh@295 -- # local -ga e810 00:21:45.936 19:30:43 -- nvmf/common.sh@296 -- # x722=() 00:21:45.936 19:30:43 -- nvmf/common.sh@296 -- # local -ga x722 00:21:45.936 19:30:43 -- nvmf/common.sh@297 -- # mlx=() 00:21:45.936 19:30:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:45.936 19:30:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.936 19:30:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:45.936 19:30:43 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:45.936 19:30:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:45.936 19:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:45.936 19:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:45.936 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:45.936 19:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:45.936 19:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:45.936 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:45.936 19:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:45.936 19:30:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:45.936 19:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.936 19:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:45.936 19:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.936 19:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:45.936 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:45.936 19:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.936 19:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:45.936 19:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.936 19:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:45.936 19:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.936 19:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:45.936 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:45.936 19:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.936 19:30:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:45.936 19:30:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:45.936 19:30:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:45.936 19:30:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.936 19:30:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.936 19:30:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.936 19:30:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:45.936 19:30:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.936 19:30:43 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.936 19:30:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:45.936 19:30:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.936 19:30:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.936 19:30:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:45.936 19:30:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:45.936 19:30:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.936 19:30:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.936 19:30:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.936 19:30:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.936 19:30:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:45.936 19:30:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.936 19:30:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.936 19:30:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.936 19:30:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:45.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:21:45.936 00:21:45.936 --- 10.0.0.2 ping statistics --- 00:21:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.936 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:45.936 19:30:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:21:45.936 00:21:45.936 --- 10.0.0.1 ping statistics --- 00:21:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.936 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:45.936 19:30:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.936 19:30:43 -- nvmf/common.sh@410 -- # return 0 00:21:45.936 19:30:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:45.936 19:30:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.936 19:30:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:45.936 19:30:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:45.937 19:30:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.937 19:30:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:45.937 19:30:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:45.937 19:30:43 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:45.937 19:30:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:45.937 19:30:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.937 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:21:45.937 19:30:43 -- nvmf/common.sh@469 -- # nvmfpid=1245016 00:21:45.937 19:30:43 -- nvmf/common.sh@470 -- # waitforlisten 1245016 00:21:45.937 19:30:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:45.937 19:30:43 -- common/autotest_common.sh@829 -- # '[' -z 1245016 ']' 00:21:45.937 19:30:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.937 19:30:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.937 
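Once the target reports it is listening on /var/tmp/spdk.sock, the rpc_cmd calls that follow repeat the same five-step sequence for cnode1 through cnode11: create one TCP transport, then per subsystem a 64 MiB malloc bdev, the subsystem itself, a namespace, and a 10.0.0.2:4420 listener. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py (which also defaults to /var/tmp/spdk.sock), so a condensed, illustrative equivalent of that loop looks roughly like:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t tcp -o -u 8192          # one transport, same flags as the trace
  for i in $(seq 1 11); do
      "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"      # 64 MiB bdev, 512-byte blocks
      "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # allow any host, serial SPDK$i
      "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done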
19:30:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.937 19:30:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.937 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:21:45.937 [2024-11-17 19:30:43.819241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:45.937 [2024-11-17 19:30:43.819344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.937 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.937 [2024-11-17 19:30:43.888215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.937 [2024-11-17 19:30:43.981837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.937 [2024-11-17 19:30:43.982010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.937 [2024-11-17 19:30:43.982030] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.937 [2024-11-17 19:30:43.982045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.937 [2024-11-17 19:30:43.982130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.937 [2024-11-17 19:30:43.982187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.937 [2024-11-17 19:30:43.982240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.937 [2024-11-17 19:30:43.982243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.876 19:30:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.876 19:30:44 -- common/autotest_common.sh@862 -- # return 0 00:21:46.876 19:30:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:46.876 19:30:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.876 19:30:44 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 [2024-11-17 19:30:44.804366] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:46.876 19:30:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.876 19:30:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 Malloc1 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 [2024-11-17 19:30:44.859636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.876 19:30:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 Malloc2 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.876 19:30:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 Malloc3 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.876 19:30:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 Malloc4 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:46.876 19:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.876 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:46.876 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 Malloc5 00:21:46.876 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:46.876 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:46.876 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:46.876 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.876 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.876 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.876 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.877 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 Malloc6 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.877 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 Malloc7 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.877 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.877 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.877 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:46.877 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.877 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.137 Malloc8 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.138 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 Malloc9 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.138 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 Malloc10 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.138 19:30:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 Malloc11 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:47.138 19:30:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.138 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:21:47.138 19:30:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.138 19:30:45 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:47.138 19:30:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.138 19:30:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:48.076 19:30:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:48.076 19:30:46 -- common/autotest_common.sh@1187 -- # local i=0 00:21:48.076 19:30:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:48.076 19:30:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:48.076 19:30:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:49.974 19:30:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:49.974 19:30:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:49.974 19:30:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:49.974 19:30:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:49.974 19:30:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.974 19:30:48 -- common/autotest_common.sh@1197 -- # return 0 00:21:49.974 19:30:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.974 19:30:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:50.540 19:30:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:50.540 19:30:48 -- common/autotest_common.sh@1187 -- # local i=0 00:21:50.540 19:30:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.540 19:30:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:50.540 19:30:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:53.065 19:30:50 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:53.065 19:30:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:53.065 19:30:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:53.065 19:30:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:53.065 19:30:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:53.065 19:30:50 -- common/autotest_common.sh@1197 -- # return 0 00:21:53.065 19:30:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.065 19:30:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:53.323 19:30:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:53.323 19:30:51 -- common/autotest_common.sh@1187 -- # local i=0 00:21:53.323 19:30:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.323 19:30:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:53.323 19:30:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:55.222 19:30:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:55.223 19:30:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:55.223 19:30:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:55.223 19:30:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:55.223 19:30:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.223 19:30:53 -- common/autotest_common.sh@1197 -- # return 0 00:21:55.223 19:30:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.223 19:30:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:56.155 19:30:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:56.155 19:30:54 -- common/autotest_common.sh@1187 -- # local i=0 00:21:56.155 19:30:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:56.155 19:30:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:56.155 19:30:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:58.051 19:30:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:58.051 19:30:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:58.052 19:30:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:58.052 19:30:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:58.052 19:30:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:58.052 19:30:56 -- common/autotest_common.sh@1197 -- # return 0 00:21:58.052 19:30:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.052 19:30:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:59.001 19:30:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:59.001 19:30:56 -- common/autotest_common.sh@1187 -- # local i=0 00:21:59.001 19:30:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.001 19:30:56 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:59.001 19:30:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:00.932 19:30:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:00.932 19:30:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:00.932 19:30:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:22:00.932 19:30:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:00.932 19:30:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:00.932 19:30:58 -- common/autotest_common.sh@1197 -- # return 0 00:22:00.932 19:30:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.932 19:30:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:01.498 19:30:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:01.498 19:30:59 -- common/autotest_common.sh@1187 -- # local i=0 00:22:01.498 19:30:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.498 19:30:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:01.498 19:30:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:04.026 19:31:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:04.026 19:31:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:04.026 19:31:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:22:04.026 19:31:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:04.026 19:31:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.026 19:31:01 -- common/autotest_common.sh@1197 -- # return 0 00:22:04.026 19:31:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.026 19:31:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:04.591 19:31:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:04.591 19:31:02 -- common/autotest_common.sh@1187 -- # local i=0 00:22:04.591 19:31:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.591 19:31:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:04.591 19:31:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:06.488 19:31:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:06.488 19:31:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:06.488 19:31:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:22:06.488 19:31:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:06.488 19:31:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.488 19:31:04 -- common/autotest_common.sh@1197 -- # return 0 00:22:06.488 19:31:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.488 19:31:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:07.422 19:31:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:07.422 19:31:05 -- common/autotest_common.sh@1187 -- # local i=0 
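On the initiator side each 'nvme connect ... cnodeN' above is followed by waitforserial, which simply polls lsblk until a namespace with that subsystem's serial number (SPDK1 .. SPDK11) shows up. A condensed restatement of that connect-and-wait pattern, using the hostnqn/hostid generated earlier in the trace and the same 15-try bound as the harness loop:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  for i in $(seq 1 11); do
      nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      tries=0
      until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do    # waitforserial: device visible yet?
          (( tries++ >= 15 )) && { echo "SPDK$i never appeared" >&2; exit 1; }
          sleep 2
      done
  done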
00:22:07.422 19:31:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.422 19:31:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:07.422 19:31:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:09.326 19:31:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:09.326 19:31:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:09.326 19:31:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:22:09.326 19:31:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:09.326 19:31:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.326 19:31:07 -- common/autotest_common.sh@1197 -- # return 0 00:22:09.326 19:31:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.326 19:31:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:10.261 19:31:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:10.261 19:31:08 -- common/autotest_common.sh@1187 -- # local i=0 00:22:10.261 19:31:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.261 19:31:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:10.261 19:31:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:12.158 19:31:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:12.158 19:31:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:12.158 19:31:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:22:12.158 19:31:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:12.158 19:31:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.158 19:31:10 -- common/autotest_common.sh@1197 -- # return 0 00:22:12.158 19:31:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.158 19:31:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:13.093 19:31:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:13.093 19:31:11 -- common/autotest_common.sh@1187 -- # local i=0 00:22:13.093 19:31:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.093 19:31:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:13.093 19:31:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:14.990 19:31:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:14.990 19:31:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:14.990 19:31:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:14.990 19:31:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:14.990 19:31:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.990 19:31:13 -- common/autotest_common.sh@1197 -- # return 0 00:22:14.990 19:31:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.990 19:31:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:15.924 19:31:14 
-- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:15.924 19:31:14 -- common/autotest_common.sh@1187 -- # local i=0 00:22:15.924 19:31:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.924 19:31:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:15.924 19:31:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:17.822 19:31:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:17.822 19:31:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:17.822 19:31:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:17.822 19:31:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:17.822 19:31:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.822 19:31:16 -- common/autotest_common.sh@1197 -- # return 0 00:22:17.822 19:31:16 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:17.822 [global] 00:22:17.822 thread=1 00:22:17.822 invalidate=1 00:22:17.822 rw=read 00:22:17.822 time_based=1 00:22:17.822 runtime=10 00:22:17.822 ioengine=libaio 00:22:17.822 direct=1 00:22:17.822 bs=262144 00:22:17.822 iodepth=64 00:22:17.822 norandommap=1 00:22:17.822 numjobs=1 00:22:17.822 00:22:17.822 [job0] 00:22:17.822 filename=/dev/nvme0n1 00:22:17.822 [job1] 00:22:17.822 filename=/dev/nvme10n1 00:22:17.822 [job2] 00:22:17.822 filename=/dev/nvme1n1 00:22:17.822 [job3] 00:22:17.822 filename=/dev/nvme2n1 00:22:17.822 [job4] 00:22:17.822 filename=/dev/nvme3n1 00:22:17.822 [job5] 00:22:17.822 filename=/dev/nvme4n1 00:22:17.822 [job6] 00:22:17.822 filename=/dev/nvme5n1 00:22:17.822 [job7] 00:22:17.822 filename=/dev/nvme6n1 00:22:17.822 [job8] 00:22:17.822 filename=/dev/nvme7n1 00:22:17.822 [job9] 00:22:17.822 filename=/dev/nvme8n1 00:22:17.822 [job10] 00:22:17.822 filename=/dev/nvme9n1 00:22:18.080 Could not set queue depth (nvme0n1) 00:22:18.080 Could not set queue depth (nvme10n1) 00:22:18.080 Could not set queue depth (nvme1n1) 00:22:18.080 Could not set queue depth (nvme2n1) 00:22:18.080 Could not set queue depth (nvme3n1) 00:22:18.080 Could not set queue depth (nvme4n1) 00:22:18.080 Could not set queue depth (nvme5n1) 00:22:18.080 Could not set queue depth (nvme6n1) 00:22:18.080 Could not set queue depth (nvme7n1) 00:22:18.080 Could not set queue depth (nvme8n1) 00:22:18.080 Could not set queue depth (nvme9n1) 00:22:18.080 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:18.080 fio-3.35 00:22:18.080 Starting 11 threads 00:22:30.284 00:22:30.284 job0: (groupid=0, jobs=1): err= 0: pid=1249384: Sun Nov 17 19:31:26 2024 00:22:30.284 read: IOPS=891, BW=223MiB/s (234MB/s)(2246MiB/10078msec) 00:22:30.284 slat (usec): min=9, max=136868, avg=883.35, stdev=4305.15 00:22:30.284 clat (usec): min=1390, max=318405, avg=70862.88, stdev=47985.14 00:22:30.284 lat (usec): min=1410, max=356117, avg=71746.23, stdev=48661.40 00:22:30.284 clat percentiles (msec): 00:22:30.284 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 28], 20.00th=[ 34], 00:22:30.284 | 30.00th=[ 40], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 66], 00:22:30.284 | 70.00th=[ 79], 80.00th=[ 102], 90.00th=[ 142], 95.00th=[ 182], 00:22:30.284 | 99.00th=[ 222], 99.50th=[ 232], 99.90th=[ 251], 99.95th=[ 275], 00:22:30.284 | 99.99th=[ 317] 00:22:30.284 bw ( KiB/s): min=93184, max=434176, per=11.46%, avg=228332.30, stdev=101265.87, samples=20 00:22:30.284 iops : min= 364, max= 1696, avg=891.90, stdev=395.58, samples=20 00:22:30.284 lat (msec) : 2=0.06%, 4=0.35%, 10=1.00%, 20=4.13%, 50=34.12% 00:22:30.284 lat (msec) : 100=40.02%, 250=20.25%, 500=0.08% 00:22:30.284 cpu : usr=0.51%, sys=2.75%, ctx=1705, majf=0, minf=4097 00:22:30.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:30.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.284 issued rwts: total=8983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.284 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.284 job1: (groupid=0, jobs=1): err= 0: pid=1249385: Sun Nov 17 19:31:26 2024 00:22:30.284 read: IOPS=635, BW=159MiB/s (167MB/s)(1606MiB/10104msec) 00:22:30.284 slat (usec): min=9, max=108981, avg=1371.10, stdev=4522.98 00:22:30.284 clat (usec): min=1908, max=258573, avg=99243.98, stdev=53069.43 00:22:30.284 lat (usec): min=1931, max=280554, avg=100615.08, stdev=53841.18 00:22:30.284 clat percentiles (msec): 00:22:30.284 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 47], 00:22:30.284 | 30.00th=[ 62], 40.00th=[ 79], 50.00th=[ 93], 60.00th=[ 111], 00:22:30.284 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 176], 95.00th=[ 197], 00:22:30.284 | 99.00th=[ 230], 99.50th=[ 236], 99.90th=[ 249], 99.95th=[ 259], 00:22:30.284 | 99.99th=[ 259] 00:22:30.284 bw ( KiB/s): min=73216, max=378368, per=8.17%, avg=162761.95, stdev=78405.72, samples=20 00:22:30.284 iops : min= 286, max= 1478, avg=635.75, stdev=306.21, samples=20 00:22:30.284 lat (msec) : 2=0.03%, 4=0.20%, 10=0.97%, 20=1.32%, 50=18.97% 00:22:30.284 lat (msec) : 100=32.90%, 250=45.52%, 500=0.09% 00:22:30.284 cpu : usr=0.38%, sys=2.01%, ctx=1304, majf=0, minf=4097 00:22:30.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:30.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.284 issued rwts: total=6422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.284 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.284 job2: (groupid=0, jobs=1): err= 0: pid=1249386: Sun Nov 17 19:31:26 2024 00:22:30.284 read: IOPS=593, 
BW=148MiB/s (156MB/s)(1495MiB/10077msec) 00:22:30.284 slat (usec): min=10, max=75645, avg=1574.35, stdev=4837.47 00:22:30.284 clat (msec): min=3, max=267, avg=106.18, stdev=45.16 00:22:30.284 lat (msec): min=3, max=275, avg=107.75, stdev=45.95 00:22:30.284 clat percentiles (msec): 00:22:30.284 | 1.00th=[ 21], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 70], 00:22:30.285 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 100], 60.00th=[ 115], 00:22:30.285 | 70.00th=[ 128], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 197], 00:22:30.285 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 249], 99.95th=[ 253], 00:22:30.285 | 99.99th=[ 268] 00:22:30.285 bw ( KiB/s): min=75264, max=256512, per=7.60%, avg=151461.00, stdev=50235.60, samples=20 00:22:30.285 iops : min= 294, max= 1002, avg=591.60, stdev=196.24, samples=20 00:22:30.285 lat (msec) : 4=0.02%, 10=0.32%, 20=0.67%, 50=7.31%, 100=42.07% 00:22:30.285 lat (msec) : 250=49.53%, 500=0.08% 00:22:30.285 cpu : usr=0.43%, sys=2.15%, ctx=1159, majf=0, minf=4097 00:22:30.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:30.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.285 issued rwts: total=5980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.285 job3: (groupid=0, jobs=1): err= 0: pid=1249387: Sun Nov 17 19:31:26 2024 00:22:30.285 read: IOPS=595, BW=149MiB/s (156MB/s)(1504MiB/10101msec) 00:22:30.285 slat (usec): min=9, max=83535, avg=1372.30, stdev=4863.33 00:22:30.285 clat (msec): min=6, max=283, avg=106.01, stdev=43.78 00:22:30.285 lat (msec): min=6, max=285, avg=107.39, stdev=44.56 00:22:30.285 clat percentiles (msec): 00:22:30.285 | 1.00th=[ 15], 5.00th=[ 47], 10.00th=[ 61], 20.00th=[ 78], 00:22:30.285 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 108], 00:22:30.285 | 70.00th=[ 122], 80.00th=[ 134], 90.00th=[ 167], 95.00th=[ 199], 00:22:30.285 | 99.00th=[ 234], 99.50th=[ 245], 99.90th=[ 264], 99.95th=[ 266], 00:22:30.285 | 99.99th=[ 284] 00:22:30.285 bw ( KiB/s): min=72192, max=237568, per=7.65%, avg=152369.00, stdev=48998.57, samples=20 00:22:30.285 iops : min= 282, max= 928, avg=595.15, stdev=191.40, samples=20 00:22:30.285 lat (msec) : 10=0.27%, 20=1.26%, 50=4.42%, 100=47.37%, 250=46.26% 00:22:30.285 lat (msec) : 500=0.42% 00:22:30.285 cpu : usr=0.27%, sys=1.72%, ctx=1237, majf=0, minf=4097 00:22:30.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:30.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.285 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.285 job4: (groupid=0, jobs=1): err= 0: pid=1249388: Sun Nov 17 19:31:26 2024 00:22:30.285 read: IOPS=795, BW=199MiB/s (209MB/s)(2010MiB/10104msec) 00:22:30.285 slat (usec): min=8, max=146447, avg=827.79, stdev=4472.30 00:22:30.285 clat (usec): min=584, max=348000, avg=79537.70, stdev=55046.46 00:22:30.285 lat (usec): min=603, max=348012, avg=80365.49, stdev=55815.96 00:22:30.285 clat percentiles (msec): 00:22:30.285 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 17], 20.00th=[ 29], 00:22:30.285 | 30.00th=[ 36], 40.00th=[ 54], 50.00th=[ 66], 60.00th=[ 87], 00:22:30.285 | 70.00th=[ 113], 80.00th=[ 132], 90.00th=[ 159], 95.00th=[ 176], 00:22:30.285 | 
99.00th=[ 213], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 313], 00:22:30.285 | 99.99th=[ 347] 00:22:30.285 bw ( KiB/s): min=76288, max=535552, per=10.25%, avg=204186.05, stdev=112656.81, samples=20 00:22:30.285 iops : min= 298, max= 2092, avg=797.60, stdev=440.06, samples=20 00:22:30.285 lat (usec) : 750=0.04%, 1000=0.11% 00:22:30.285 lat (msec) : 2=0.50%, 4=2.03%, 10=3.72%, 20=5.57%, 50=26.00% 00:22:30.285 lat (msec) : 100=25.77%, 250=36.18%, 500=0.09% 00:22:30.285 cpu : usr=0.37%, sys=2.16%, ctx=1694, majf=0, minf=4097 00:22:30.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:30.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.285 issued rwts: total=8040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.285 job5: (groupid=0, jobs=1): err= 0: pid=1249389: Sun Nov 17 19:31:26 2024 00:22:30.285 read: IOPS=744, BW=186MiB/s (195MB/s)(1864MiB/10020msec) 00:22:30.285 slat (usec): min=8, max=141169, avg=845.37, stdev=4555.06 00:22:30.285 clat (usec): min=1677, max=340476, avg=85119.06, stdev=50559.84 00:22:30.285 lat (usec): min=1711, max=344764, avg=85964.42, stdev=51246.04 00:22:30.285 clat percentiles (msec): 00:22:30.285 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 37], 00:22:30.285 | 30.00th=[ 51], 40.00th=[ 65], 50.00th=[ 82], 60.00th=[ 89], 00:22:30.285 | 70.00th=[ 108], 80.00th=[ 130], 90.00th=[ 159], 95.00th=[ 180], 00:22:30.285 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 257], 99.95th=[ 268], 00:22:30.285 | 99.99th=[ 342] 00:22:30.285 bw ( KiB/s): min=64000, max=423936, per=9.49%, avg=189217.70, stdev=85725.91, samples=20 00:22:30.285 iops : min= 250, max= 1656, avg=739.10, stdev=334.87, samples=20 00:22:30.285 lat (msec) : 2=0.11%, 4=0.67%, 10=1.84%, 20=2.71%, 50=24.53% 00:22:30.285 lat (msec) : 100=37.12%, 250=32.92%, 500=0.11% 00:22:30.285 cpu : usr=0.34%, sys=2.09%, ctx=1672, majf=0, minf=4097 00:22:30.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:30.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.285 issued rwts: total=7455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.285 job6: (groupid=0, jobs=1): err= 0: pid=1249390: Sun Nov 17 19:31:26 2024 00:22:30.285 read: IOPS=591, BW=148MiB/s (155MB/s)(1489MiB/10077msec) 00:22:30.285 slat (usec): min=9, max=90965, avg=1251.00, stdev=5077.61 00:22:30.285 clat (usec): min=1928, max=270290, avg=106930.30, stdev=51431.15 00:22:30.285 lat (usec): min=1946, max=270333, avg=108181.30, stdev=52218.73 00:22:30.285 clat percentiles (msec): 00:22:30.285 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 48], 20.00th=[ 57], 00:22:30.285 | 30.00th=[ 66], 40.00th=[ 88], 50.00th=[ 111], 60.00th=[ 125], 00:22:30.285 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 174], 95.00th=[ 203], 00:22:30.285 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 253], 99.95th=[ 257], 00:22:30.285 | 99.99th=[ 271] 00:22:30.285 bw ( KiB/s): min=78336, max=300032, per=7.57%, avg=150866.80, stdev=55813.75, samples=20 00:22:30.285 iops : min= 306, max= 1172, avg=589.30, stdev=218.00, samples=20 00:22:30.285 lat (msec) : 2=0.02%, 10=1.04%, 20=2.84%, 50=8.06%, 100=32.65% 00:22:30.285 lat (msec) : 250=55.30%, 500=0.10% 00:22:30.285 cpu : 
usr=0.31%, sys=1.79%, ctx=1288, majf=0, minf=4097 00:22:30.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:30.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.285 issued rwts: total=5957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.285 job7: (groupid=0, jobs=1): err= 0: pid=1249391: Sun Nov 17 19:31:26 2024 00:22:30.285 read: IOPS=922, BW=231MiB/s (242MB/s)(2308MiB/10013msec) 00:22:30.285 slat (usec): min=13, max=69497, avg=1001.98, stdev=3251.59 00:22:30.285 clat (msec): min=2, max=220, avg=68.36, stdev=38.25 00:22:30.285 lat (msec): min=2, max=233, avg=69.36, stdev=38.80 00:22:30.285 clat percentiles (msec): 00:22:30.285 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 30], 00:22:30.285 | 30.00th=[ 45], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 71], 00:22:30.285 | 70.00th=[ 82], 80.00th=[ 101], 90.00th=[ 126], 95.00th=[ 144], 00:22:30.285 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 199], 99.95th=[ 199], 00:22:30.285 | 99.99th=[ 222] 00:22:30.285 bw ( KiB/s): min=100352, max=608256, per=11.78%, avg=234729.60, stdev=124476.12, samples=20 00:22:30.285 iops : min= 392, max= 2376, avg=916.90, stdev=486.24, samples=20 00:22:30.285 lat (msec) : 4=0.03%, 10=0.22%, 20=0.75%, 50=34.24%, 100=44.50% 00:22:30.285 lat (msec) : 250=20.26% 00:22:30.285 cpu : usr=0.53%, sys=3.07%, ctx=1724, majf=0, minf=4097 00:22:30.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:30.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.286 issued rwts: total=9233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.286 job8: (groupid=0, jobs=1): err= 0: pid=1249394: Sun Nov 17 19:31:26 2024 00:22:30.286 read: IOPS=504, BW=126MiB/s (132MB/s)(1273MiB/10100msec) 00:22:30.286 slat (usec): min=10, max=99546, avg=1672.29, stdev=5667.81 00:22:30.286 clat (msec): min=10, max=260, avg=125.19, stdev=43.59 00:22:30.286 lat (msec): min=10, max=283, avg=126.86, stdev=44.32 00:22:30.286 clat percentiles (msec): 00:22:30.286 | 1.00th=[ 35], 5.00th=[ 55], 10.00th=[ 69], 20.00th=[ 89], 00:22:30.286 | 30.00th=[ 102], 40.00th=[ 114], 50.00th=[ 126], 60.00th=[ 136], 00:22:30.286 | 70.00th=[ 146], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 207], 00:22:30.286 | 99.00th=[ 234], 99.50th=[ 245], 99.90th=[ 253], 99.95th=[ 262], 00:22:30.286 | 99.99th=[ 262] 00:22:30.286 bw ( KiB/s): min=75776, max=180736, per=6.46%, avg=128673.60, stdev=30895.91, samples=20 00:22:30.286 iops : min= 296, max= 706, avg=502.60, stdev=120.64, samples=20 00:22:30.286 lat (msec) : 20=0.18%, 50=3.91%, 100=25.16%, 250=70.56%, 500=0.20% 00:22:30.286 cpu : usr=0.25%, sys=1.84%, ctx=1087, majf=0, minf=3721 00:22:30.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:30.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.286 issued rwts: total=5091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.286 job9: (groupid=0, jobs=1): err= 0: pid=1249395: Sun Nov 17 19:31:26 2024 00:22:30.286 read: IOPS=833, BW=208MiB/s 
(218MB/s)(2099MiB/10075msec) 00:22:30.286 slat (usec): min=13, max=137436, avg=1060.90, stdev=3765.77 00:22:30.286 clat (msec): min=2, max=287, avg=75.69, stdev=42.84 00:22:30.286 lat (msec): min=2, max=287, avg=76.75, stdev=43.41 00:22:30.286 clat percentiles (msec): 00:22:30.286 | 1.00th=[ 11], 5.00th=[ 19], 10.00th=[ 27], 20.00th=[ 32], 00:22:30.286 | 30.00th=[ 46], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 80], 00:22:30.286 | 70.00th=[ 95], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 153], 00:22:30.286 | 99.00th=[ 186], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 230], 00:22:30.286 | 99.99th=[ 288] 00:22:30.286 bw ( KiB/s): min=108544, max=487936, per=10.70%, avg=213298.90, stdev=97196.44, samples=20 00:22:30.286 iops : min= 424, max= 1906, avg=833.15, stdev=379.71, samples=20 00:22:30.286 lat (msec) : 4=0.14%, 10=0.81%, 20=4.98%, 50=25.77%, 100=40.61% 00:22:30.286 lat (msec) : 250=27.67%, 500=0.02% 00:22:30.286 cpu : usr=0.50%, sys=2.65%, ctx=1575, majf=0, minf=4097 00:22:30.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:30.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.286 issued rwts: total=8395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.286 job10: (groupid=0, jobs=1): err= 0: pid=1249398: Sun Nov 17 19:31:26 2024 00:22:30.286 read: IOPS=701, BW=175MiB/s (184MB/s)(1772MiB/10102msec) 00:22:30.286 slat (usec): min=9, max=101841, avg=798.97, stdev=3707.29 00:22:30.286 clat (usec): min=696, max=270874, avg=90367.33, stdev=46340.83 00:22:30.286 lat (usec): min=724, max=270912, avg=91166.30, stdev=46702.74 00:22:30.286 clat percentiles (msec): 00:22:30.286 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 40], 20.00th=[ 55], 00:22:30.286 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 92], 00:22:30.286 | 70.00th=[ 107], 80.00th=[ 127], 90.00th=[ 153], 95.00th=[ 180], 00:22:30.286 | 99.00th=[ 224], 99.50th=[ 236], 99.90th=[ 266], 99.95th=[ 266], 00:22:30.286 | 99.99th=[ 271] 00:22:30.286 bw ( KiB/s): min=108544, max=252416, per=9.02%, avg=179767.60, stdev=44465.58, samples=20 00:22:30.286 iops : min= 424, max= 986, avg=702.20, stdev=173.68, samples=20 00:22:30.286 lat (usec) : 750=0.03%, 1000=0.01% 00:22:30.286 lat (msec) : 2=0.16%, 4=0.17%, 10=0.59%, 20=2.85%, 50=12.91% 00:22:30.286 lat (msec) : 100=50.13%, 250=32.77%, 500=0.38% 00:22:30.286 cpu : usr=0.14%, sys=1.90%, ctx=1690, majf=0, minf=4097 00:22:30.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:30.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:30.286 issued rwts: total=7086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:30.286 00:22:30.286 Run status group 0 (all jobs): 00:22:30.286 READ: bw=1946MiB/s (2041MB/s), 126MiB/s-231MiB/s (132MB/s-242MB/s), io=19.2GiB (20.6GB), run=10013-10104msec 00:22:30.286 00:22:30.286 Disk stats (read/write): 00:22:30.286 nvme0n1: ios=17648/0, merge=0/0, ticks=1240433/0, in_queue=1240433, util=97.29% 00:22:30.286 nvme10n1: ios=12685/0, merge=0/0, ticks=1231731/0, in_queue=1231731, util=97.49% 00:22:30.286 nvme1n1: ios=11768/0, merge=0/0, ticks=1235561/0, in_queue=1235561, util=97.75% 00:22:30.286 nvme2n1: ios=11774/0, merge=0/0, ticks=1228522/0, 
in_queue=1228522, util=97.87% 00:22:30.286 nvme3n1: ios=15912/0, merge=0/0, ticks=1238166/0, in_queue=1238166, util=97.95% 00:22:30.286 nvme4n1: ios=14662/0, merge=0/0, ticks=1244455/0, in_queue=1244455, util=98.27% 00:22:30.286 nvme5n1: ios=11717/0, merge=0/0, ticks=1237431/0, in_queue=1237431, util=98.42% 00:22:30.286 nvme6n1: ios=18204/0, merge=0/0, ticks=1237857/0, in_queue=1237857, util=98.51% 00:22:30.286 nvme7n1: ios=10017/0, merge=0/0, ticks=1230166/0, in_queue=1230166, util=98.94% 00:22:30.286 nvme8n1: ios=16556/0, merge=0/0, ticks=1236263/0, in_queue=1236263, util=99.11% 00:22:30.286 nvme9n1: ios=14018/0, merge=0/0, ticks=1242454/0, in_queue=1242454, util=99.24% 00:22:30.286 19:31:26 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:30.286 [global] 00:22:30.286 thread=1 00:22:30.286 invalidate=1 00:22:30.286 rw=randwrite 00:22:30.286 time_based=1 00:22:30.286 runtime=10 00:22:30.286 ioengine=libaio 00:22:30.286 direct=1 00:22:30.286 bs=262144 00:22:30.286 iodepth=64 00:22:30.286 norandommap=1 00:22:30.286 numjobs=1 00:22:30.286 00:22:30.286 [job0] 00:22:30.286 filename=/dev/nvme0n1 00:22:30.286 [job1] 00:22:30.286 filename=/dev/nvme10n1 00:22:30.286 [job2] 00:22:30.286 filename=/dev/nvme1n1 00:22:30.286 [job3] 00:22:30.286 filename=/dev/nvme2n1 00:22:30.286 [job4] 00:22:30.286 filename=/dev/nvme3n1 00:22:30.286 [job5] 00:22:30.286 filename=/dev/nvme4n1 00:22:30.286 [job6] 00:22:30.286 filename=/dev/nvme5n1 00:22:30.286 [job7] 00:22:30.286 filename=/dev/nvme6n1 00:22:30.286 [job8] 00:22:30.286 filename=/dev/nvme7n1 00:22:30.286 [job9] 00:22:30.286 filename=/dev/nvme8n1 00:22:30.286 [job10] 00:22:30.286 filename=/dev/nvme9n1 00:22:30.286 Could not set queue depth (nvme0n1) 00:22:30.286 Could not set queue depth (nvme10n1) 00:22:30.286 Could not set queue depth (nvme1n1) 00:22:30.286 Could not set queue depth (nvme2n1) 00:22:30.286 Could not set queue depth (nvme3n1) 00:22:30.286 Could not set queue depth (nvme4n1) 00:22:30.286 Could not set queue depth (nvme5n1) 00:22:30.286 Could not set queue depth (nvme6n1) 00:22:30.286 Could not set queue depth (nvme7n1) 00:22:30.286 Could not set queue depth (nvme8n1) 00:22:30.286 Could not set queue depth (nvme9n1) 00:22:30.286 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.286 fio-3.35 00:22:30.286 Starting 11 threads 00:22:40.259 00:22:40.259 job0: (groupid=0, jobs=1): err= 0: pid=1250595: Sun Nov 17 19:31:37 2024 00:22:40.259 write: IOPS=619, BW=155MiB/s (162MB/s)(1561MiB/10075msec); 0 zone resets 00:22:40.259 slat (usec): min=17, max=47254, avg=1155.63, stdev=3198.78 00:22:40.259 clat (usec): min=803, max=277054, avg=102091.44, stdev=65411.00 00:22:40.259 lat (usec): min=875, max=292730, avg=103247.07, stdev=66192.82 00:22:40.259 clat percentiles (msec): 00:22:40.259 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 25], 20.00th=[ 42], 00:22:40.259 | 30.00th=[ 52], 40.00th=[ 73], 50.00th=[ 90], 60.00th=[ 112], 00:22:40.259 | 70.00th=[ 138], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 218], 00:22:40.259 | 99.00th=[ 255], 99.50th=[ 266], 99.90th=[ 275], 99.95th=[ 279], 00:22:40.259 | 99.99th=[ 279] 00:22:40.259 bw ( KiB/s): min=75264, max=387584, per=10.87%, avg=158182.40, stdev=71204.43, samples=20 00:22:40.259 iops : min= 294, max= 1514, avg=617.90, stdev=278.14, samples=20 00:22:40.259 lat (usec) : 1000=0.10% 00:22:40.259 lat (msec) : 2=0.22%, 4=1.12%, 10=3.94%, 20=3.62%, 50=20.14% 00:22:40.259 lat (msec) : 100=25.31%, 250=44.33%, 500=1.22% 00:22:40.259 cpu : usr=1.80%, sys=2.27%, ctx=3250, majf=0, minf=1 00:22:40.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:40.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.259 issued rwts: total=0,6242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.259 job1: (groupid=0, jobs=1): err= 0: pid=1250607: Sun Nov 17 19:31:37 2024 00:22:40.259 write: IOPS=445, BW=111MiB/s (117MB/s)(1141MiB/10240msec); 0 zone resets 00:22:40.259 slat (usec): min=24, max=157812, avg=1508.18, stdev=4962.96 00:22:40.259 clat (usec): min=909, max=532163, avg=141965.19, stdev=80481.52 00:22:40.259 lat (usec): min=994, max=532194, avg=143473.37, stdev=81333.96 00:22:40.259 clat percentiles (msec): 00:22:40.259 | 1.00th=[ 8], 5.00th=[ 35], 10.00th=[ 64], 20.00th=[ 84], 00:22:40.259 | 30.00th=[ 88], 40.00th=[ 102], 50.00th=[ 121], 60.00th=[ 146], 00:22:40.259 | 70.00th=[ 167], 80.00th=[ 207], 90.00th=[ 271], 95.00th=[ 296], 00:22:40.259 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 514], 99.95th=[ 514], 00:22:40.259 | 99.99th=[ 531] 00:22:40.259 bw ( KiB/s): min=51200, max=190464, per=7.91%, avg=115181.05, stdev=46067.97, samples=20 00:22:40.259 iops : min= 200, max= 744, avg=449.90, stdev=179.98, samples=20 00:22:40.259 lat (usec) : 1000=0.04% 00:22:40.259 lat (msec) : 2=0.18%, 4=0.33%, 10=0.59%, 20=0.90%, 50=5.81% 00:22:40.259 lat (msec) : 100=31.40%, 250=47.32%, 500=13.30%, 750=0.13% 00:22:40.259 cpu : usr=1.46%, sys=1.74%, ctx=2324, majf=0, minf=1 00:22:40.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:40.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.259 issued rwts: total=0,4563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.259 job2: (groupid=0, jobs=1): err= 0: pid=1250608: Sun Nov 17 19:31:37 2024 00:22:40.259 write: IOPS=638, 
BW=160MiB/s (167MB/s)(1605MiB/10045msec); 0 zone resets 00:22:40.259 slat (usec): min=12, max=103124, avg=1218.76, stdev=3605.41 00:22:40.259 clat (usec): min=1194, max=334990, avg=98923.00, stdev=73960.82 00:22:40.259 lat (usec): min=1479, max=335033, avg=100141.77, stdev=74926.53 00:22:40.259 clat percentiles (msec): 00:22:40.259 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 31], 20.00th=[ 41], 00:22:40.259 | 30.00th=[ 47], 40.00th=[ 59], 50.00th=[ 75], 60.00th=[ 86], 00:22:40.259 | 70.00th=[ 134], 80.00th=[ 165], 90.00th=[ 199], 95.00th=[ 264], 00:22:40.259 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 334], 00:22:40.259 | 99.99th=[ 334] 00:22:40.259 bw ( KiB/s): min=51200, max=364032, per=11.17%, avg=162688.00, stdev=95243.86, samples=20 00:22:40.259 iops : min= 200, max= 1422, avg=635.50, stdev=372.05, samples=20 00:22:40.259 lat (msec) : 2=0.06%, 4=0.47%, 10=2.35%, 20=3.74%, 50=27.73% 00:22:40.259 lat (msec) : 100=28.87%, 250=30.99%, 500=5.78% 00:22:40.260 cpu : usr=2.38%, sys=2.24%, ctx=3136, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued rwts: total=0,6418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job3: (groupid=0, jobs=1): err= 0: pid=1250609: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=713, BW=178MiB/s (187MB/s)(1827MiB/10240msec); 0 zone resets 00:22:40.260 slat (usec): min=18, max=179395, avg=1130.75, stdev=4090.16 00:22:40.260 clat (usec): min=1480, max=555168, avg=88481.47, stdev=77292.98 00:22:40.260 lat (msec): min=2, max=555, avg=89.61, stdev=78.29 00:22:40.260 clat percentiles (msec): 00:22:40.260 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 44], 00:22:40.260 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 54], 00:22:40.260 | 70.00th=[ 88], 80.00th=[ 153], 90.00th=[ 218], 95.00th=[ 249], 00:22:40.260 | 99.00th=[ 317], 99.50th=[ 351], 99.90th=[ 514], 99.95th=[ 535], 00:22:40.260 | 99.99th=[ 558] 00:22:40.260 bw ( KiB/s): min=53248, max=371712, per=12.73%, avg=185395.20, stdev=117139.15, samples=20 00:22:40.260 iops : min= 208, max= 1452, avg=724.20, stdev=457.57, samples=20 00:22:40.260 lat (msec) : 2=0.01%, 4=0.10%, 10=0.77%, 20=2.74%, 50=53.13% 00:22:40.260 lat (msec) : 100=16.42%, 250=21.93%, 500=4.76%, 750=0.14% 00:22:40.260 cpu : usr=2.44%, sys=2.40%, ctx=2962, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued rwts: total=0,7306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job4: (groupid=0, jobs=1): err= 0: pid=1250610: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=493, BW=123MiB/s (129MB/s)(1244MiB/10077msec); 0 zone resets 00:22:40.260 slat (usec): min=17, max=73322, avg=1744.47, stdev=4291.56 00:22:40.260 clat (usec): min=1219, max=349558, avg=127810.45, stdev=74367.12 00:22:40.260 lat (usec): min=1931, max=349601, avg=129554.92, stdev=75460.98 00:22:40.260 clat percentiles (msec): 00:22:40.260 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 39], 20.00th=[ 63], 00:22:40.260 | 30.00th=[ 88], 40.00th=[ 104], 
50.00th=[ 122], 60.00th=[ 136], 00:22:40.260 | 70.00th=[ 146], 80.00th=[ 182], 90.00th=[ 230], 95.00th=[ 284], 00:22:40.260 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 351], 00:22:40.260 | 99.99th=[ 351] 00:22:40.260 bw ( KiB/s): min=51200, max=273920, per=8.64%, avg=125772.80, stdev=60006.35, samples=20 00:22:40.260 iops : min= 200, max= 1070, avg=491.30, stdev=234.40, samples=20 00:22:40.260 lat (msec) : 2=0.08%, 4=0.44%, 10=2.23%, 20=3.46%, 50=7.40% 00:22:40.260 lat (msec) : 100=24.38%, 250=53.98%, 500=8.04% 00:22:40.260 cpu : usr=1.66%, sys=1.66%, ctx=2152, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued rwts: total=0,4976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job5: (groupid=0, jobs=1): err= 0: pid=1250611: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=449, BW=112MiB/s (118MB/s)(1151MiB/10236msec); 0 zone resets 00:22:40.260 slat (usec): min=17, max=97331, avg=1927.37, stdev=4706.06 00:22:40.260 clat (usec): min=1456, max=538301, avg=140260.38, stdev=78945.84 00:22:40.260 lat (msec): min=2, max=538, avg=142.19, stdev=79.95 00:22:40.260 clat percentiles (msec): 00:22:40.260 | 1.00th=[ 9], 5.00th=[ 45], 10.00th=[ 61], 20.00th=[ 87], 00:22:40.260 | 30.00th=[ 97], 40.00th=[ 118], 50.00th=[ 132], 60.00th=[ 140], 00:22:40.260 | 70.00th=[ 146], 80.00th=[ 167], 90.00th=[ 259], 95.00th=[ 334], 00:22:40.260 | 99.00th=[ 372], 99.50th=[ 418], 99.90th=[ 518], 99.95th=[ 518], 00:22:40.260 | 99.99th=[ 542] 00:22:40.260 bw ( KiB/s): min=45056, max=174080, per=7.98%, avg=116249.60, stdev=43836.19, samples=20 00:22:40.260 iops : min= 176, max= 680, avg=454.10, stdev=171.24, samples=20 00:22:40.260 lat (msec) : 2=0.02%, 4=0.26%, 10=1.15%, 20=0.91%, 50=6.06% 00:22:40.260 lat (msec) : 100=23.13%, 250=57.62%, 500=10.71%, 750=0.13% 00:22:40.260 cpu : usr=1.42%, sys=1.65%, ctx=1684, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued rwts: total=0,4604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job6: (groupid=0, jobs=1): err= 0: pid=1250614: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=445, BW=111MiB/s (117MB/s)(1141MiB/10240msec); 0 zone resets 00:22:40.260 slat (usec): min=18, max=107976, avg=1503.71, stdev=5039.17 00:22:40.260 clat (msec): min=2, max=610, avg=142.06, stdev=87.73 00:22:40.260 lat (msec): min=3, max=610, avg=143.57, stdev=88.90 00:22:40.260 clat percentiles (msec): 00:22:40.260 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 56], 00:22:40.260 | 30.00th=[ 81], 40.00th=[ 118], 50.00th=[ 142], 60.00th=[ 167], 00:22:40.260 | 70.00th=[ 182], 80.00th=[ 203], 90.00th=[ 257], 95.00th=[ 309], 00:22:40.260 | 99.00th=[ 384], 99.50th=[ 414], 99.90th=[ 518], 99.95th=[ 518], 00:22:40.260 | 99.99th=[ 609] 00:22:40.260 bw ( KiB/s): min=47104, max=227840, per=7.91%, avg=115148.80, stdev=41103.88, samples=20 00:22:40.260 iops : min= 184, max= 890, avg=449.80, stdev=160.56, samples=20 00:22:40.260 lat (msec) : 4=0.07%, 10=1.89%, 20=3.16%, 50=13.44%, 
100=16.40% 00:22:40.260 lat (msec) : 250=53.92%, 500=11.00%, 750=0.13% 00:22:40.260 cpu : usr=1.42%, sys=1.70%, ctx=2746, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued rwts: total=0,4562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job7: (groupid=0, jobs=1): err= 0: pid=1250615: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=510, BW=128MiB/s (134MB/s)(1307MiB/10238msec); 0 zone resets 00:22:40.260 slat (usec): min=16, max=90751, avg=1305.30, stdev=4234.00 00:22:40.260 clat (usec): min=1061, max=552220, avg=123940.10, stdev=79456.47 00:22:40.260 lat (usec): min=1092, max=552255, avg=125245.40, stdev=80370.79 00:22:40.260 clat percentiles (msec): 00:22:40.260 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 41], 20.00th=[ 69], 00:22:40.260 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 97], 60.00th=[ 114], 00:22:40.260 | 70.00th=[ 148], 80.00th=[ 197], 90.00th=[ 236], 95.00th=[ 266], 00:22:40.260 | 99.00th=[ 351], 99.50th=[ 414], 99.90th=[ 531], 99.95th=[ 531], 00:22:40.260 | 99.99th=[ 550] 00:22:40.260 bw ( KiB/s): min=47616, max=262656, per=9.08%, avg=132224.00, stdev=54413.38, samples=20 00:22:40.260 iops : min= 186, max= 1026, avg=516.50, stdev=212.55, samples=20 00:22:40.260 lat (msec) : 2=0.10%, 4=0.29%, 10=0.96%, 20=2.37%, 50=10.25% 00:22:40.260 lat (msec) : 100=37.85%, 250=40.97%, 500=7.02%, 750=0.19% 00:22:40.260 cpu : usr=1.33%, sys=1.86%, ctx=2799, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued rwts: total=0,5228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job8: (groupid=0, jobs=1): err= 0: pid=1250617: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=461, BW=115MiB/s (121MB/s)(1180MiB/10237msec); 0 zone resets 00:22:40.260 slat (usec): min=20, max=74228, avg=1621.04, stdev=4328.81 00:22:40.260 clat (usec): min=1196, max=542009, avg=137089.03, stdev=82130.30 00:22:40.260 lat (usec): min=1278, max=542057, avg=138710.07, stdev=83296.94 00:22:40.260 clat percentiles (msec): 00:22:40.260 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 58], 00:22:40.260 | 30.00th=[ 92], 40.00th=[ 130], 50.00th=[ 140], 60.00th=[ 146], 00:22:40.260 | 70.00th=[ 157], 80.00th=[ 192], 90.00th=[ 266], 95.00th=[ 288], 00:22:40.260 | 99.00th=[ 334], 99.50th=[ 414], 99.90th=[ 523], 99.95th=[ 523], 00:22:40.260 | 99.99th=[ 542] 00:22:40.260 bw ( KiB/s): min=53760, max=247808, per=8.19%, avg=119229.25, stdev=47461.13, samples=20 00:22:40.260 iops : min= 210, max= 968, avg=465.70, stdev=185.41, samples=20 00:22:40.260 lat (msec) : 2=0.15%, 4=0.70%, 10=1.78%, 20=3.54%, 50=11.40% 00:22:40.260 lat (msec) : 100=14.72%, 250=56.19%, 500=11.31%, 750=0.21% 00:22:40.260 cpu : usr=1.58%, sys=1.56%, ctx=2571, majf=0, minf=1 00:22:40.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:40.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.260 issued 
rwts: total=0,4720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.260 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.260 job9: (groupid=0, jobs=1): err= 0: pid=1250618: Sun Nov 17 19:31:37 2024 00:22:40.260 write: IOPS=444, BW=111MiB/s (116MB/s)(1138MiB/10240msec); 0 zone resets 00:22:40.260 slat (usec): min=16, max=86643, avg=1472.97, stdev=4701.40 00:22:40.260 clat (usec): min=860, max=542521, avg=142459.16, stdev=91225.03 00:22:40.260 lat (usec): min=893, max=560386, avg=143932.13, stdev=92381.25 00:22:40.260 clat percentiles (msec): 00:22:40.261 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 16], 20.00th=[ 37], 00:22:40.261 | 30.00th=[ 82], 40.00th=[ 128], 50.00th=[ 155], 60.00th=[ 178], 00:22:40.261 | 70.00th=[ 190], 80.00th=[ 220], 90.00th=[ 259], 95.00th=[ 279], 00:22:40.261 | 99.00th=[ 355], 99.50th=[ 443], 99.90th=[ 523], 99.95th=[ 542], 00:22:40.261 | 99.99th=[ 542] 00:22:40.261 bw ( KiB/s): min=59392, max=303104, per=7.89%, avg=114849.75, stdev=54467.49, samples=20 00:22:40.261 iops : min= 232, max= 1184, avg=448.60, stdev=212.78, samples=20 00:22:40.261 lat (usec) : 1000=0.04% 00:22:40.261 lat (msec) : 2=0.55%, 4=1.03%, 10=4.40%, 20=7.16%, 50=9.98% 00:22:40.261 lat (msec) : 100=11.05%, 250=53.56%, 500=11.96%, 750=0.26% 00:22:40.261 cpu : usr=1.50%, sys=1.55%, ctx=2876, majf=0, minf=1 00:22:40.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:40.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.261 issued rwts: total=0,4550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.261 job10: (groupid=0, jobs=1): err= 0: pid=1250619: Sun Nov 17 19:31:37 2024 00:22:40.261 write: IOPS=494, BW=124MiB/s (130MB/s)(1267MiB/10239msec); 0 zone resets 00:22:40.261 slat (usec): min=21, max=115368, avg=1426.56, stdev=4612.41 00:22:40.261 clat (usec): min=974, max=584353, avg=127828.16, stdev=85874.68 00:22:40.261 lat (usec): min=1002, max=584393, avg=129254.72, stdev=86922.31 00:22:40.261 clat percentiles (msec): 00:22:40.261 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 33], 20.00th=[ 43], 00:22:40.261 | 30.00th=[ 70], 40.00th=[ 93], 50.00th=[ 122], 60.00th=[ 144], 00:22:40.261 | 70.00th=[ 167], 80.00th=[ 186], 90.00th=[ 249], 95.00th=[ 284], 00:22:40.261 | 99.00th=[ 376], 99.50th=[ 468], 99.90th=[ 567], 99.95th=[ 567], 00:22:40.261 | 99.99th=[ 584] 00:22:40.261 bw ( KiB/s): min=39424, max=309760, per=8.80%, avg=128076.80, stdev=64896.30, samples=20 00:22:40.261 iops : min= 154, max= 1210, avg=500.30, stdev=253.50, samples=20 00:22:40.261 lat (usec) : 1000=0.02% 00:22:40.261 lat (msec) : 2=0.20%, 4=0.26%, 10=2.05%, 20=3.43%, 50=17.65% 00:22:40.261 lat (msec) : 100=18.52%, 250=48.03%, 500=9.49%, 750=0.36% 00:22:40.261 cpu : usr=1.60%, sys=1.79%, ctx=2660, majf=0, minf=1 00:22:40.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:40.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:40.261 issued rwts: total=0,5066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:40.261 00:22:40.261 Run status group 0 (all jobs): 00:22:40.261 WRITE: bw=1422MiB/s (1491MB/s), 111MiB/s-178MiB/s (116MB/s-187MB/s), io=14.2GiB (15.3GB), run=10045-10240msec 00:22:40.261 00:22:40.261 Disk stats 
(read/write): 00:22:40.261 nvme0n1: ios=49/12254, merge=0/0, ticks=13/1221546, in_queue=1221559, util=97.22% 00:22:40.261 nvme10n1: ios=35/9085, merge=0/0, ticks=1407/1226288, in_queue=1227695, util=100.00% 00:22:40.261 nvme1n1: ios=0/12530, merge=0/0, ticks=0/1221602, in_queue=1221602, util=97.61% 00:22:40.261 nvme2n1: ios=44/14570, merge=0/0, ticks=2122/1222933, in_queue=1225055, util=100.00% 00:22:40.261 nvme3n1: ios=5/9715, merge=0/0, ticks=5/1213861, in_queue=1213866, util=97.81% 00:22:40.261 nvme4n1: ios=0/9171, merge=0/0, ticks=0/1234487, in_queue=1234487, util=98.18% 00:22:40.261 nvme5n1: ios=0/9083, merge=0/0, ticks=0/1243460, in_queue=1243460, util=98.34% 00:22:40.261 nvme6n1: ios=0/10418, merge=0/0, ticks=0/1244571, in_queue=1244571, util=98.45% 00:22:40.261 nvme7n1: ios=0/9402, merge=0/0, ticks=0/1240518, in_queue=1240518, util=98.82% 00:22:40.261 nvme8n1: ios=0/9059, merge=0/0, ticks=0/1246237, in_queue=1246237, util=99.00% 00:22:40.261 nvme9n1: ios=0/10093, merge=0/0, ticks=0/1241404, in_queue=1241404, util=99.10% 00:22:40.261 19:31:37 -- target/multiconnection.sh@36 -- # sync 00:22:40.261 19:31:37 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:40.261 19:31:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.261 19:31:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:40.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:40.261 19:31:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:40.261 19:31:38 -- common/autotest_common.sh@1208 -- # local i=0 00:22:40.261 19:31:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:40.261 19:31:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:22:40.261 19:31:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:40.261 19:31:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:40.261 19:31:38 -- common/autotest_common.sh@1220 -- # return 0 00:22:40.261 19:31:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.261 19:31:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.261 19:31:38 -- common/autotest_common.sh@10 -- # set +x 00:22:40.261 19:31:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.261 19:31:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.261 19:31:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:40.261 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:40.261 19:31:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:40.261 19:31:38 -- common/autotest_common.sh@1208 -- # local i=0 00:22:40.261 19:31:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:40.261 19:31:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:40.261 19:31:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:40.261 19:31:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:40.261 19:31:38 -- common/autotest_common.sh@1220 -- # return 0 00:22:40.261 19:31:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:40.261 19:31:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.261 19:31:38 -- common/autotest_common.sh@10 -- # set +x 00:22:40.261 19:31:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.261 19:31:38 -- target/multiconnection.sh@37 -- # for i in $(seq 
1 $NVMF_SUBSYS) 00:22:40.261 19:31:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:40.519 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:40.519 19:31:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:40.519 19:31:38 -- common/autotest_common.sh@1208 -- # local i=0 00:22:40.519 19:31:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:40.519 19:31:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:40.520 19:31:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:40.520 19:31:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:40.520 19:31:38 -- common/autotest_common.sh@1220 -- # return 0 00:22:40.520 19:31:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:40.520 19:31:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.520 19:31:38 -- common/autotest_common.sh@10 -- # set +x 00:22:40.520 19:31:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.520 19:31:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:40.520 19:31:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:41.083 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:41.083 19:31:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:41.083 19:31:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.083 19:31:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.083 19:31:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:41.083 19:31:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.083 19:31:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:41.083 19:31:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.083 19:31:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:41.083 19:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.083 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:41.083 19:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.083 19:31:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.083 19:31:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:41.083 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:41.083 19:31:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:41.083 19:31:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.083 19:31:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.083 19:31:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:41.083 19:31:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.083 19:31:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:41.083 19:31:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.083 19:31:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:41.083 19:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.083 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:41.083 19:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.083 19:31:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.084 19:31:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode6 00:22:41.340 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:41.340 19:31:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:41.340 19:31:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.340 19:31:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.340 19:31:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:41.340 19:31:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.340 19:31:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:41.340 19:31:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.340 19:31:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:41.340 19:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.340 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:41.340 19:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.340 19:31:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.340 19:31:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:41.340 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:41.340 19:31:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:41.340 19:31:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.340 19:31:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.340 19:31:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:41.340 19:31:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.340 19:31:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:41.340 19:31:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.340 19:31:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:41.340 19:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.340 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:41.340 19:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.340 19:31:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.340 19:31:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:41.598 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:41.598 19:31:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:41.598 19:31:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.598 19:31:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.598 19:31:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:41.598 19:31:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.598 19:31:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:41.598 19:31:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.598 19:31:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:41.598 19:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.598 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:41.598 19:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.598 19:31:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.598 19:31:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:41.856 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 
controller(s) 00:22:41.856 19:31:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:41.856 19:31:39 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.856 19:31:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.856 19:31:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:41.856 19:31:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.856 19:31:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:41.856 19:31:39 -- common/autotest_common.sh@1220 -- # return 0 00:22:41.856 19:31:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:41.856 19:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.856 19:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:41.856 19:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.856 19:31:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.856 19:31:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:41.856 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:41.856 19:31:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:41.856 19:31:40 -- common/autotest_common.sh@1208 -- # local i=0 00:22:41.856 19:31:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:41.856 19:31:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:41.856 19:31:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:41.856 19:31:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:42.114 19:31:40 -- common/autotest_common.sh@1220 -- # return 0 00:22:42.114 19:31:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:42.114 19:31:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.114 19:31:40 -- common/autotest_common.sh@10 -- # set +x 00:22:42.114 19:31:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.114 19:31:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.114 19:31:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:42.114 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:42.114 19:31:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:42.114 19:31:40 -- common/autotest_common.sh@1208 -- # local i=0 00:22:42.114 19:31:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:42.114 19:31:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:42.114 19:31:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:42.114 19:31:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:42.114 19:31:40 -- common/autotest_common.sh@1220 -- # return 0 00:22:42.114 19:31:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:42.114 19:31:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.114 19:31:40 -- common/autotest_common.sh@10 -- # set +x 00:22:42.114 19:31:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.114 19:31:40 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:42.114 19:31:40 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:42.114 19:31:40 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:42.114 19:31:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:42.114 19:31:40 -- 
nvmf/common.sh@116 -- # sync 00:22:42.114 19:31:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:42.114 19:31:40 -- nvmf/common.sh@119 -- # set +e 00:22:42.114 19:31:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:42.114 19:31:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:42.114 rmmod nvme_tcp 00:22:42.114 rmmod nvme_fabrics 00:22:42.114 rmmod nvme_keyring 00:22:42.114 19:31:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:42.114 19:31:40 -- nvmf/common.sh@123 -- # set -e 00:22:42.114 19:31:40 -- nvmf/common.sh@124 -- # return 0 00:22:42.114 19:31:40 -- nvmf/common.sh@477 -- # '[' -n 1245016 ']' 00:22:42.114 19:31:40 -- nvmf/common.sh@478 -- # killprocess 1245016 00:22:42.114 19:31:40 -- common/autotest_common.sh@936 -- # '[' -z 1245016 ']' 00:22:42.114 19:31:40 -- common/autotest_common.sh@940 -- # kill -0 1245016 00:22:42.114 19:31:40 -- common/autotest_common.sh@941 -- # uname 00:22:42.114 19:31:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.114 19:31:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1245016 00:22:42.114 19:31:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:42.114 19:31:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:42.371 19:31:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1245016' 00:22:42.371 killing process with pid 1245016 00:22:42.371 19:31:40 -- common/autotest_common.sh@955 -- # kill 1245016 00:22:42.371 19:31:40 -- common/autotest_common.sh@960 -- # wait 1245016 00:22:42.630 19:31:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:42.630 19:31:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:42.630 19:31:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:42.630 19:31:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.630 19:31:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:42.630 19:31:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.630 19:31:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.630 19:31:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.165 19:31:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:45.165 00:22:45.165 real 1m1.485s 00:22:45.165 user 3m28.514s 00:22:45.165 sys 0m25.178s 00:22:45.165 19:31:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:45.165 19:31:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.165 ************************************ 00:22:45.165 END TEST nvmf_multiconnection 00:22:45.165 ************************************ 00:22:45.165 19:31:42 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:45.165 19:31:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:45.165 19:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:45.165 19:31:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.165 ************************************ 00:22:45.165 START TEST nvmf_initiator_timeout 00:22:45.165 ************************************ 00:22:45.165 19:31:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:45.165 * Looking for test storage... 
00:22:45.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.165 19:31:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:45.165 19:31:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:45.165 19:31:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:45.165 19:31:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:45.165 19:31:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:45.165 19:31:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:45.166 19:31:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:45.166 19:31:43 -- scripts/common.sh@335 -- # IFS=.-: 00:22:45.166 19:31:43 -- scripts/common.sh@335 -- # read -ra ver1 00:22:45.166 19:31:43 -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.166 19:31:43 -- scripts/common.sh@336 -- # read -ra ver2 00:22:45.166 19:31:43 -- scripts/common.sh@337 -- # local 'op=<' 00:22:45.166 19:31:43 -- scripts/common.sh@339 -- # ver1_l=2 00:22:45.166 19:31:43 -- scripts/common.sh@340 -- # ver2_l=1 00:22:45.166 19:31:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:45.166 19:31:43 -- scripts/common.sh@343 -- # case "$op" in 00:22:45.166 19:31:43 -- scripts/common.sh@344 -- # : 1 00:22:45.166 19:31:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:45.166 19:31:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.166 19:31:43 -- scripts/common.sh@364 -- # decimal 1 00:22:45.166 19:31:43 -- scripts/common.sh@352 -- # local d=1 00:22:45.166 19:31:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.166 19:31:43 -- scripts/common.sh@354 -- # echo 1 00:22:45.166 19:31:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:45.166 19:31:43 -- scripts/common.sh@365 -- # decimal 2 00:22:45.166 19:31:43 -- scripts/common.sh@352 -- # local d=2 00:22:45.166 19:31:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.166 19:31:43 -- scripts/common.sh@354 -- # echo 2 00:22:45.166 19:31:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:45.166 19:31:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:45.166 19:31:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:45.166 19:31:43 -- scripts/common.sh@367 -- # return 0 00:22:45.166 19:31:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.166 19:31:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.166 --rc genhtml_branch_coverage=1 00:22:45.166 --rc genhtml_function_coverage=1 00:22:45.166 --rc genhtml_legend=1 00:22:45.166 --rc geninfo_all_blocks=1 00:22:45.166 --rc geninfo_unexecuted_blocks=1 00:22:45.166 00:22:45.166 ' 00:22:45.166 19:31:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.166 --rc genhtml_branch_coverage=1 00:22:45.166 --rc genhtml_function_coverage=1 00:22:45.166 --rc genhtml_legend=1 00:22:45.166 --rc geninfo_all_blocks=1 00:22:45.166 --rc geninfo_unexecuted_blocks=1 00:22:45.166 00:22:45.166 ' 00:22:45.166 19:31:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.166 --rc genhtml_branch_coverage=1 00:22:45.166 --rc genhtml_function_coverage=1 00:22:45.166 --rc genhtml_legend=1 00:22:45.166 --rc geninfo_all_blocks=1 00:22:45.166 --rc geninfo_unexecuted_blocks=1 00:22:45.166 00:22:45.166 
' 00:22:45.166 19:31:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:45.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.166 --rc genhtml_branch_coverage=1 00:22:45.166 --rc genhtml_function_coverage=1 00:22:45.166 --rc genhtml_legend=1 00:22:45.166 --rc geninfo_all_blocks=1 00:22:45.166 --rc geninfo_unexecuted_blocks=1 00:22:45.166 00:22:45.166 ' 00:22:45.166 19:31:43 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.166 19:31:43 -- nvmf/common.sh@7 -- # uname -s 00:22:45.166 19:31:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.166 19:31:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.166 19:31:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.166 19:31:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.166 19:31:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.166 19:31:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.166 19:31:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.166 19:31:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.166 19:31:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.166 19:31:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.166 19:31:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.166 19:31:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.166 19:31:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.166 19:31:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.166 19:31:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.166 19:31:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.166 19:31:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.166 19:31:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.166 19:31:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.166 19:31:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.166 19:31:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.166 19:31:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.166 19:31:43 -- paths/export.sh@5 -- # export PATH 00:22:45.166 19:31:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.166 19:31:43 -- nvmf/common.sh@46 -- # : 0 00:22:45.166 19:31:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:45.166 19:31:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:45.166 19:31:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:45.166 19:31:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.166 19:31:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.166 19:31:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:45.166 19:31:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:45.166 19:31:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:45.166 19:31:43 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.166 19:31:43 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.166 19:31:43 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:45.166 19:31:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:45.166 19:31:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.166 19:31:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:45.166 19:31:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:45.166 19:31:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:45.166 19:31:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.166 19:31:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.166 19:31:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.166 19:31:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:45.166 19:31:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:45.166 19:31:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:45.166 19:31:43 -- common/autotest_common.sh@10 -- # set +x 00:22:47.068 19:31:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:47.068 19:31:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:47.068 19:31:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:47.068 19:31:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:47.068 19:31:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:47.068 19:31:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:47.068 19:31:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:47.068 19:31:45 -- nvmf/common.sh@294 -- # net_devs=() 00:22:47.069 19:31:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:47.069 
19:31:45 -- nvmf/common.sh@295 -- # e810=() 00:22:47.069 19:31:45 -- nvmf/common.sh@295 -- # local -ga e810 00:22:47.069 19:31:45 -- nvmf/common.sh@296 -- # x722=() 00:22:47.069 19:31:45 -- nvmf/common.sh@296 -- # local -ga x722 00:22:47.069 19:31:45 -- nvmf/common.sh@297 -- # mlx=() 00:22:47.069 19:31:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:47.069 19:31:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.069 19:31:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:47.069 19:31:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:47.069 19:31:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:47.069 19:31:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:47.069 19:31:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:47.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:47.069 19:31:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:47.069 19:31:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:47.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:47.069 19:31:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:47.069 19:31:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:47.069 19:31:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.069 19:31:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:47.069 19:31:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.069 19:31:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:22:47.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:47.069 19:31:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.069 19:31:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:47.069 19:31:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.069 19:31:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:47.069 19:31:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.069 19:31:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:47.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:47.069 19:31:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.069 19:31:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:47.069 19:31:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:47.069 19:31:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:47.069 19:31:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.069 19:31:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.069 19:31:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.069 19:31:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:47.069 19:31:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.069 19:31:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.069 19:31:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:47.069 19:31:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.069 19:31:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.069 19:31:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:47.069 19:31:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:47.069 19:31:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.069 19:31:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.069 19:31:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.069 19:31:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.069 19:31:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:47.069 19:31:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.069 19:31:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.069 19:31:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.069 19:31:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:47.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:22:47.069 00:22:47.069 --- 10.0.0.2 ping statistics --- 00:22:47.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.069 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:47.069 19:31:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:47.069 00:22:47.069 --- 10.0.0.1 ping statistics --- 00:22:47.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.069 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:47.069 19:31:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.069 19:31:45 -- nvmf/common.sh@410 -- # return 0 00:22:47.069 19:31:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:47.069 19:31:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.069 19:31:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:47.069 19:31:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.069 19:31:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:47.069 19:31:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:47.069 19:31:45 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:47.069 19:31:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:47.069 19:31:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.069 19:31:45 -- common/autotest_common.sh@10 -- # set +x 00:22:47.069 19:31:45 -- nvmf/common.sh@469 -- # nvmfpid=1253991 00:22:47.069 19:31:45 -- nvmf/common.sh@470 -- # waitforlisten 1253991 00:22:47.069 19:31:45 -- common/autotest_common.sh@829 -- # '[' -z 1253991 ']' 00:22:47.069 19:31:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.069 19:31:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.069 19:31:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.069 19:31:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.069 19:31:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.069 19:31:45 -- common/autotest_common.sh@10 -- # set +x 00:22:47.069 [2024-11-17 19:31:45.242803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:47.069 [2024-11-17 19:31:45.242900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.069 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.069 [2024-11-17 19:31:45.312527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.327 [2024-11-17 19:31:45.406533] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:47.327 [2024-11-17 19:31:45.406710] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.327 [2024-11-17 19:31:45.406732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.327 [2024-11-17 19:31:45.406746] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.327 [2024-11-17 19:31:45.406804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.328 [2024-11-17 19:31:45.406840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.328 [2024-11-17 19:31:45.406892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.328 [2024-11-17 19:31:45.406895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.261 19:31:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.261 19:31:46 -- common/autotest_common.sh@862 -- # return 0 00:22:48.261 19:31:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.261 19:31:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.261 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 19:31:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.261 19:31:46 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:48.261 19:31:46 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:48.261 19:31:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.262 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 Malloc0 00:22:48.262 19:31:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.262 19:31:46 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:48.262 19:31:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.262 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 Delay0 00:22:48.262 19:31:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.262 19:31:46 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.262 19:31:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.262 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 [2024-11-17 19:31:46.265348] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.262 19:31:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.262 19:31:46 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:48.262 19:31:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.262 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 19:31:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.262 19:31:46 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:48.262 19:31:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.262 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 19:31:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.262 19:31:46 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.262 19:31:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.262 19:31:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 [2024-11-17 19:31:46.293624] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.262 19:31:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.262 19:31:46 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:48.827 19:31:46 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:48.827 19:31:46 -- common/autotest_common.sh@1187 -- # local i=0 00:22:48.827 19:31:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:48.827 19:31:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:48.827 19:31:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:50.794 19:31:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:50.794 19:31:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:50.794 19:31:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:50.794 19:31:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:50.794 19:31:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:50.794 19:31:49 -- common/autotest_common.sh@1197 -- # return 0 00:22:50.795 19:31:49 -- target/initiator_timeout.sh@35 -- # fio_pid=1254576 00:22:50.795 19:31:49 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:50.795 19:31:49 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:50.795 [global] 00:22:50.795 thread=1 00:22:50.795 invalidate=1 00:22:50.795 rw=write 00:22:50.795 time_based=1 00:22:50.795 runtime=60 00:22:50.795 ioengine=libaio 00:22:50.795 direct=1 00:22:50.795 bs=4096 00:22:50.795 iodepth=1 00:22:50.795 norandommap=0 00:22:50.795 numjobs=1 00:22:50.795 00:22:50.795 verify_dump=1 00:22:50.795 verify_backlog=512 00:22:50.795 verify_state_save=0 00:22:50.795 do_verify=1 00:22:50.795 verify=crc32c-intel 00:22:50.795 [job0] 00:22:50.795 filename=/dev/nvme0n1 00:22:50.795 Could not set queue depth (nvme0n1) 00:22:51.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:51.052 fio-3.35 00:22:51.052 Starting 1 thread 00:22:54.332 19:31:52 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:54.332 19:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.332 19:31:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.332 true 00:22:54.332 19:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.332 19:31:52 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:54.332 19:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.332 19:31:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.332 true 00:22:54.332 19:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.332 19:31:52 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:54.332 19:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.332 19:31:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.332 true 00:22:54.332 19:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.332 19:31:52 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:54.332 19:31:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.332 19:31:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.332 true 00:22:54.332 19:31:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
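For readability, the Delay0 bring-up that the trace above records can be collapsed into the following sequence. This is a sketch reconstructed from the trace, not harness output: rpc_cmd in the test scripts is assumed to forward to SPDK's scripts/rpc.py (run here from the SPDK checkout root), the delay-bdev latency arguments are assumed to be in microseconds, and the NQN, serial, addresses and latency values are exactly the ones shown in the log (including the 310000000 passed for p99_write).

# Start the target inside the target-side namespace, as the trace shows (shm id 0, all tracepoint groups, 4 cores).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Back the future namespace with a 64 MB malloc bdev (512-byte blocks) wrapped in a delay bdev,
# so completions can later be stalled to exercise the initiator timeout path
# (initial avg/p99 read and write latencies: 30).
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# Export Delay0 over NVMe/TCP on 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Connect from the initiator side (the harness passes the generated host NQN/ID seen above),
# start the 60 s, qd=1, 4 KiB write job against the new /dev/nvme0n1, then raise the
# delay-bdev latencies to ~31 s so in-flight I/O outlives the initiator's timeout.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
sleep 3
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000

After the sleep 3 that follows in the trace, the latencies are dropped back to 30 so the stalled I/O can complete and fio can finish its 60-second run cleanly, which is what the job summary further down confirms.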
00:22:54.332 19:31:52 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:56.858 19:31:55 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:56.858 19:31:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.858 19:31:55 -- common/autotest_common.sh@10 -- # set +x 00:22:56.858 true 00:22:56.858 19:31:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.858 19:31:55 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:56.858 19:31:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.858 19:31:55 -- common/autotest_common.sh@10 -- # set +x 00:22:56.858 true 00:22:56.858 19:31:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.858 19:31:55 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:56.858 19:31:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.858 19:31:55 -- common/autotest_common.sh@10 -- # set +x 00:22:56.858 true 00:22:56.858 19:31:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.858 19:31:55 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:56.858 19:31:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.858 19:31:55 -- common/autotest_common.sh@10 -- # set +x 00:22:56.858 true 00:22:56.858 19:31:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.858 19:31:55 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:56.858 19:31:55 -- target/initiator_timeout.sh@54 -- # wait 1254576 00:23:53.087 00:23:53.087 job0: (groupid=0, jobs=1): err= 0: pid=1254645: Sun Nov 17 19:32:49 2024 00:23:53.087 read: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60024msec) 00:23:53.087 slat (usec): min=4, max=4793, avg=22.42, stdev=149.61 00:23:53.087 clat (usec): min=220, max=40961k, avg=58322.65, stdev=1279605.97 00:23:53.087 lat (usec): min=228, max=40961k, avg=58345.07, stdev=1279606.08 00:23:53.087 clat percentiles (usec): 00:23:53.087 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 00:23:53.087 | 20.00th=[ 247], 30.00th=[ 253], 40.00th=[ 262], 00:23:53.087 | 50.00th=[ 297], 60.00th=[ 41157], 70.00th=[ 41157], 00:23:53.087 | 80.00th=[ 41157], 90.00th=[ 42206], 95.00th=[ 42206], 00:23:53.087 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:23:53.087 | 99.95th=[17112761], 99.99th=[17112761] 00:23:53.087 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60024msec); 0 zone resets 00:23:53.087 slat (usec): min=6, max=29813, avg=43.24, stdev=931.26 00:23:53.087 clat (usec): min=169, max=407, avg=219.21, stdev=27.17 00:23:53.087 lat (usec): min=177, max=30089, avg=262.46, stdev=933.59 00:23:53.087 clat percentiles (usec): 00:23:53.087 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:23:53.087 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 221], 60.00th=[ 235], 00:23:53.087 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:23:53.087 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 408], 00:23:53.087 | 99.99th=[ 408] 00:23:53.087 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:23:53.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:23:53.087 lat (usec) : 250=55.47%, 500=22.51% 00:23:53.087 lat (msec) : 50=21.97%, >=2000=0.05% 00:23:53.087 cpu : usr=0.05%, sys=0.06%, ctx=2051, majf=0, minf=1 00:23:53.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.087 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.087 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:53.087 00:23:53.087 Run status group 0 (all jobs): 00:23:53.087 READ: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60024-60024msec 00:23:53.087 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60024-60024msec 00:23:53.087 00:23:53.087 Disk stats (read/write): 00:23:53.087 nvme0n1: ios=1073/1024, merge=0/0, ticks=19466/215, in_queue=19681, util=99.95% 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:53.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:53.087 19:32:49 -- common/autotest_common.sh@1208 -- # local i=0 00:23:53.087 19:32:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:53.087 19:32:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:53.087 19:32:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:53.087 19:32:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:53.087 19:32:49 -- common/autotest_common.sh@1220 -- # return 0 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:53.087 nvmf hotplug test: fio successful as expected 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.087 19:32:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.087 19:32:49 -- common/autotest_common.sh@10 -- # set +x 00:23:53.087 19:32:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:53.087 19:32:49 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:53.087 19:32:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:53.087 19:32:49 -- nvmf/common.sh@116 -- # sync 00:23:53.087 19:32:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:53.087 19:32:49 -- nvmf/common.sh@119 -- # set +e 00:23:53.087 19:32:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:53.087 19:32:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:53.087 rmmod nvme_tcp 00:23:53.087 rmmod nvme_fabrics 00:23:53.087 rmmod nvme_keyring 00:23:53.087 19:32:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:53.087 19:32:49 -- nvmf/common.sh@123 -- # set -e 00:23:53.087 19:32:49 -- nvmf/common.sh@124 -- # return 0 00:23:53.087 19:32:49 -- nvmf/common.sh@477 -- # '[' -n 1253991 ']' 00:23:53.087 19:32:49 -- nvmf/common.sh@478 -- # killprocess 1253991 00:23:53.087 19:32:49 -- common/autotest_common.sh@936 -- # '[' -z 1253991 ']' 00:23:53.087 19:32:49 -- common/autotest_common.sh@940 -- # kill -0 1253991 00:23:53.087 19:32:49 -- common/autotest_common.sh@941 -- # uname 00:23:53.087 19:32:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:53.087 19:32:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
1253991 00:23:53.087 19:32:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:53.087 19:32:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:53.087 19:32:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1253991' 00:23:53.087 killing process with pid 1253991 00:23:53.087 19:32:49 -- common/autotest_common.sh@955 -- # kill 1253991 00:23:53.087 19:32:49 -- common/autotest_common.sh@960 -- # wait 1253991 00:23:53.087 19:32:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:53.087 19:32:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:53.087 19:32:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:53.087 19:32:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.087 19:32:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:53.087 19:32:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.087 19:32:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.087 19:32:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.026 19:32:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:54.026 00:23:54.026 real 1m9.032s 00:23:54.026 user 4m14.131s 00:23:54.026 sys 0m6.659s 00:23:54.026 19:32:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:54.026 19:32:51 -- common/autotest_common.sh@10 -- # set +x 00:23:54.026 ************************************ 00:23:54.026 END TEST nvmf_initiator_timeout 00:23:54.026 ************************************ 00:23:54.026 19:32:51 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:54.026 19:32:51 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:54.026 19:32:51 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:54.026 19:32:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:54.026 19:32:51 -- common/autotest_common.sh@10 -- # set +x 00:23:55.927 19:32:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:55.927 19:32:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:55.927 19:32:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:55.927 19:32:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:55.927 19:32:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:55.927 19:32:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:55.927 19:32:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:55.927 19:32:54 -- nvmf/common.sh@294 -- # net_devs=() 00:23:55.927 19:32:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:55.927 19:32:54 -- nvmf/common.sh@295 -- # e810=() 00:23:55.927 19:32:54 -- nvmf/common.sh@295 -- # local -ga e810 00:23:55.927 19:32:54 -- nvmf/common.sh@296 -- # x722=() 00:23:55.927 19:32:54 -- nvmf/common.sh@296 -- # local -ga x722 00:23:55.927 19:32:54 -- nvmf/common.sh@297 -- # mlx=() 00:23:55.927 19:32:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:55.927 19:32:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.927 19:32:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:55.927 19:32:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:55.927 19:32:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:55.927 19:32:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:55.927 19:32:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:55.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:55.927 19:32:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:55.927 19:32:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:55.927 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:55.927 19:32:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:55.927 19:32:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:55.927 19:32:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:55.927 19:32:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.927 19:32:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:55.927 19:32:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.927 19:32:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:55.927 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:55.927 19:32:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.927 19:32:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:55.927 19:32:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.927 19:32:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:55.927 19:32:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.927 19:32:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:55.927 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:55.927 19:32:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.927 19:32:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:55.927 19:32:54 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.927 19:32:54 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:55.927 19:32:54 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:55.927 19:32:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:55.927 19:32:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:55.927 19:32:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.927 ************************************ 00:23:55.927 START TEST nvmf_perf_adq 00:23:55.927 ************************************ 00:23:55.927 19:32:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:55.927 * Looking for test storage... 00:23:55.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.927 19:32:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:55.927 19:32:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:55.927 19:32:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:55.927 19:32:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:55.927 19:32:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:55.927 19:32:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:55.927 19:32:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:55.927 19:32:54 -- scripts/common.sh@335 -- # IFS=.-: 00:23:55.927 19:32:54 -- scripts/common.sh@335 -- # read -ra ver1 00:23:55.927 19:32:54 -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.927 19:32:54 -- scripts/common.sh@336 -- # read -ra ver2 00:23:55.927 19:32:54 -- scripts/common.sh@337 -- # local 'op=<' 00:23:55.927 19:32:54 -- scripts/common.sh@339 -- # ver1_l=2 00:23:55.927 19:32:54 -- scripts/common.sh@340 -- # ver2_l=1 00:23:55.927 19:32:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:55.927 19:32:54 -- scripts/common.sh@343 -- # case "$op" in 00:23:55.927 19:32:54 -- scripts/common.sh@344 -- # : 1 00:23:55.927 19:32:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:55.927 19:32:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.927 19:32:54 -- scripts/common.sh@364 -- # decimal 1 00:23:55.927 19:32:54 -- scripts/common.sh@352 -- # local d=1 00:23:55.927 19:32:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.927 19:32:54 -- scripts/common.sh@354 -- # echo 1 00:23:55.927 19:32:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:55.927 19:32:54 -- scripts/common.sh@365 -- # decimal 2 00:23:56.185 19:32:54 -- scripts/common.sh@352 -- # local d=2 00:23:56.185 19:32:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.185 19:32:54 -- scripts/common.sh@354 -- # echo 2 00:23:56.185 19:32:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:56.185 19:32:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:56.185 19:32:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:56.185 19:32:54 -- scripts/common.sh@367 -- # return 0 00:23:56.185 19:32:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.185 19:32:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.185 --rc genhtml_branch_coverage=1 00:23:56.185 --rc genhtml_function_coverage=1 00:23:56.185 --rc genhtml_legend=1 00:23:56.185 --rc geninfo_all_blocks=1 00:23:56.185 --rc geninfo_unexecuted_blocks=1 00:23:56.185 00:23:56.185 ' 00:23:56.186 19:32:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.186 --rc genhtml_branch_coverage=1 00:23:56.186 --rc genhtml_function_coverage=1 00:23:56.186 --rc genhtml_legend=1 00:23:56.186 --rc geninfo_all_blocks=1 00:23:56.186 --rc geninfo_unexecuted_blocks=1 00:23:56.186 00:23:56.186 ' 00:23:56.186 19:32:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.186 --rc genhtml_branch_coverage=1 00:23:56.186 --rc genhtml_function_coverage=1 00:23:56.186 --rc genhtml_legend=1 00:23:56.186 --rc geninfo_all_blocks=1 00:23:56.186 --rc geninfo_unexecuted_blocks=1 00:23:56.186 00:23:56.186 ' 00:23:56.186 19:32:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.186 --rc genhtml_branch_coverage=1 00:23:56.186 --rc genhtml_function_coverage=1 00:23:56.186 --rc genhtml_legend=1 00:23:56.186 --rc geninfo_all_blocks=1 00:23:56.186 --rc geninfo_unexecuted_blocks=1 00:23:56.186 00:23:56.186 ' 00:23:56.186 19:32:54 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.186 19:32:54 -- nvmf/common.sh@7 -- # uname -s 00:23:56.186 19:32:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.186 19:32:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.186 19:32:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.186 19:32:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.186 19:32:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.186 19:32:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.186 19:32:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.186 19:32:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.186 19:32:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.186 19:32:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.186 19:32:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.186 19:32:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.186 19:32:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.186 19:32:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.186 19:32:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.186 19:32:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.186 19:32:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.186 19:32:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.186 19:32:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.186 19:32:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.186 19:32:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.186 19:32:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.186 19:32:54 -- paths/export.sh@5 -- # export PATH 00:23:56.186 19:32:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.186 19:32:54 -- nvmf/common.sh@46 -- # : 0 00:23:56.186 19:32:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:56.186 19:32:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:56.186 19:32:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:56.186 19:32:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.186 19:32:54 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.186 19:32:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:56.186 19:32:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:56.186 19:32:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:56.186 19:32:54 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:56.186 19:32:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:56.186 19:32:54 -- common/autotest_common.sh@10 -- # set +x 00:23:58.090 19:32:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:58.090 19:32:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:58.090 19:32:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:58.090 19:32:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:58.090 19:32:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:58.090 19:32:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:58.090 19:32:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:58.090 19:32:56 -- nvmf/common.sh@294 -- # net_devs=() 00:23:58.090 19:32:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:58.090 19:32:56 -- nvmf/common.sh@295 -- # e810=() 00:23:58.090 19:32:56 -- nvmf/common.sh@295 -- # local -ga e810 00:23:58.090 19:32:56 -- nvmf/common.sh@296 -- # x722=() 00:23:58.090 19:32:56 -- nvmf/common.sh@296 -- # local -ga x722 00:23:58.090 19:32:56 -- nvmf/common.sh@297 -- # mlx=() 00:23:58.090 19:32:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:58.090 19:32:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.090 19:32:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.090 19:32:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.090 19:32:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.091 19:32:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:58.091 19:32:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:58.091 19:32:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:58.091 19:32:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.091 19:32:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:58.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:58.091 19:32:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.091 19:32:56 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:58.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:58.091 19:32:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:58.091 19:32:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:58.091 19:32:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.091 19:32:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.091 19:32:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.091 19:32:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.091 19:32:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:58.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:58.091 19:32:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.091 19:32:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.091 19:32:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.091 19:32:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.091 19:32:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.091 19:32:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:58.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:58.091 19:32:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.091 19:32:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:58.091 19:32:56 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.091 19:32:56 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:58.091 19:32:56 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:58.091 19:32:56 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:58.091 19:32:56 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:59.029 19:32:57 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:01.562 19:32:59 -- target/perf_adq.sh@54 -- # sleep 5 00:24:06.832 19:33:04 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:06.832 19:33:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:06.832 19:33:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.832 19:33:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:06.832 19:33:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:06.832 19:33:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:06.832 19:33:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.832 19:33:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.832 19:33:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.832 19:33:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:06.832 19:33:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:06.832 19:33:04 -- common/autotest_common.sh@10 -- # set +x 00:24:06.832 19:33:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:06.832 19:33:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:06.832 19:33:04 -- 
nvmf/common.sh@290 -- # local -a pci_devs 00:24:06.832 19:33:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:06.832 19:33:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:06.832 19:33:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:06.832 19:33:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:06.832 19:33:04 -- nvmf/common.sh@294 -- # net_devs=() 00:24:06.832 19:33:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:06.832 19:33:04 -- nvmf/common.sh@295 -- # e810=() 00:24:06.832 19:33:04 -- nvmf/common.sh@295 -- # local -ga e810 00:24:06.832 19:33:04 -- nvmf/common.sh@296 -- # x722=() 00:24:06.832 19:33:04 -- nvmf/common.sh@296 -- # local -ga x722 00:24:06.832 19:33:04 -- nvmf/common.sh@297 -- # mlx=() 00:24:06.832 19:33:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:06.832 19:33:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.832 19:33:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:06.832 19:33:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:06.832 19:33:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:06.832 19:33:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.832 19:33:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.832 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.832 19:33:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.832 19:33:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.832 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.832 19:33:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:06.832 19:33:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:06.832 19:33:04 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.832 19:33:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.832 19:33:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.832 19:33:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.832 19:33:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.832 19:33:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.832 19:33:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.832 19:33:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.832 19:33:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.832 19:33:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.832 19:33:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:06.832 19:33:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.832 19:33:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:06.832 19:33:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:06.832 19:33:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:06.832 19:33:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:06.832 19:33:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.832 19:33:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.832 19:33:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.832 19:33:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:06.832 19:33:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.832 19:33:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.832 19:33:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:06.832 19:33:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.833 19:33:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.833 19:33:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:06.833 19:33:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:06.833 19:33:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.833 19:33:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.833 19:33:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.833 19:33:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.833 19:33:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:06.833 19:33:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.833 19:33:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.833 19:33:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.833 19:33:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:06.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:06.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:24:06.833 00:24:06.833 --- 10.0.0.2 ping statistics --- 00:24:06.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.833 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:24:06.833 19:33:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:24:06.833 00:24:06.833 --- 10.0.0.1 ping statistics --- 00:24:06.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.833 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:24:06.833 19:33:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.833 19:33:04 -- nvmf/common.sh@410 -- # return 0 00:24:06.833 19:33:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:06.833 19:33:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.833 19:33:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:06.833 19:33:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:06.833 19:33:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.833 19:33:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:06.833 19:33:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:06.833 19:33:04 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:06.833 19:33:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:06.833 19:33:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.833 19:33:04 -- common/autotest_common.sh@10 -- # set +x 00:24:06.833 19:33:04 -- nvmf/common.sh@469 -- # nvmfpid=1266687 00:24:06.833 19:33:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:06.833 19:33:04 -- nvmf/common.sh@470 -- # waitforlisten 1266687 00:24:06.833 19:33:04 -- common/autotest_common.sh@829 -- # '[' -z 1266687 ']' 00:24:06.833 19:33:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.833 19:33:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.833 19:33:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.833 19:33:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.833 19:33:04 -- common/autotest_common.sh@10 -- # set +x 00:24:06.833 [2024-11-17 19:33:04.839136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:06.833 [2024-11-17 19:33:04.839218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.833 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.833 [2024-11-17 19:33:04.907054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.833 [2024-11-17 19:33:04.996462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:06.833 [2024-11-17 19:33:04.996630] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
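(For readability: the nvmftestinit/nvmf_tcp_init sequence traced above, condensed. The cvl_0_0/cvl_0_1 names are the two ice ports this host enumerated earlier; the target-side port is isolated in its own network namespace so initiator and target traffic cross the physical link. This is a restatement of the commands already visible in the trace, not additional test steps.)

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator sanity check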
00:24:06.833 [2024-11-17 19:33:04.996648] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.833 [2024-11-17 19:33:04.996661] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.833 [2024-11-17 19:33:04.996714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.833 [2024-11-17 19:33:04.996740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.833 [2024-11-17 19:33:04.996798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.833 [2024-11-17 19:33:04.996801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.833 19:33:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.833 19:33:05 -- common/autotest_common.sh@862 -- # return 0 00:24:06.833 19:33:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.833 19:33:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.833 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.833 19:33:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.833 19:33:05 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:06.833 19:33:05 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:06.833 19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.833 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.833 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.833 19:33:05 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:06.833 19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.833 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:07.092 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.092 19:33:05 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:07.092 19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.092 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:07.092 [2024-11-17 19:33:05.176248] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.092 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.092 19:33:05 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:07.092 19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.092 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:07.092 Malloc1 00:24:07.092 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.092 19:33:05 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.092 19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.092 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:07.092 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.092 19:33:05 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:07.092 19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.092 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:07.092 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.092 19:33:05 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.092 
19:33:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.092 19:33:05 -- common/autotest_common.sh@10 -- # set +x 00:24:07.092 [2024-11-17 19:33:05.227028] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.092 19:33:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.092 19:33:05 -- target/perf_adq.sh@73 -- # perfpid=1266713 00:24:07.092 19:33:05 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:07.092 19:33:05 -- target/perf_adq.sh@74 -- # sleep 2 00:24:07.092 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.993 19:33:07 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:08.993 19:33:07 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:08.993 19:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.993 19:33:07 -- target/perf_adq.sh@76 -- # wc -l 00:24:08.993 19:33:07 -- common/autotest_common.sh@10 -- # set +x 00:24:08.993 19:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.251 19:33:07 -- target/perf_adq.sh@76 -- # count=4 00:24:09.251 19:33:07 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:09.251 19:33:07 -- target/perf_adq.sh@81 -- # wait 1266713 00:24:17.362 Initializing NVMe Controllers 00:24:17.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:17.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:17.362 Initialization complete. Launching workers. 
00:24:17.362 ======================================================== 00:24:17.363 Latency(us) 00:24:17.363 Device Information : IOPS MiB/s Average min max 00:24:17.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11209.70 43.79 5709.75 1571.10 9530.92 00:24:17.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11644.60 45.49 5497.38 1286.66 8531.55 00:24:17.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11001.50 42.97 5818.35 1510.98 9592.96 00:24:17.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10845.60 42.37 5901.48 1976.61 9931.70 00:24:17.363 ======================================================== 00:24:17.363 Total : 44701.38 174.61 5727.67 1286.66 9931.70 00:24:17.363 00:24:17.363 19:33:15 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:17.363 19:33:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:17.363 19:33:15 -- nvmf/common.sh@116 -- # sync 00:24:17.363 19:33:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:17.363 19:33:15 -- nvmf/common.sh@119 -- # set +e 00:24:17.363 19:33:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:17.363 19:33:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:17.363 rmmod nvme_tcp 00:24:17.363 rmmod nvme_fabrics 00:24:17.363 rmmod nvme_keyring 00:24:17.363 19:33:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:17.363 19:33:15 -- nvmf/common.sh@123 -- # set -e 00:24:17.363 19:33:15 -- nvmf/common.sh@124 -- # return 0 00:24:17.363 19:33:15 -- nvmf/common.sh@477 -- # '[' -n 1266687 ']' 00:24:17.363 19:33:15 -- nvmf/common.sh@478 -- # killprocess 1266687 00:24:17.363 19:33:15 -- common/autotest_common.sh@936 -- # '[' -z 1266687 ']' 00:24:17.363 19:33:15 -- common/autotest_common.sh@940 -- # kill -0 1266687 00:24:17.363 19:33:15 -- common/autotest_common.sh@941 -- # uname 00:24:17.363 19:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.363 19:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1266687 00:24:17.363 19:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:17.363 19:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:17.363 19:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1266687' 00:24:17.363 killing process with pid 1266687 00:24:17.363 19:33:15 -- common/autotest_common.sh@955 -- # kill 1266687 00:24:17.363 19:33:15 -- common/autotest_common.sh@960 -- # wait 1266687 00:24:17.621 19:33:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:17.621 19:33:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:17.621 19:33:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:17.621 19:33:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.621 19:33:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:17.621 19:33:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.621 19:33:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.621 19:33:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.523 19:33:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:19.523 19:33:17 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:19.523 19:33:17 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:20.459 19:33:18 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:23.002 19:33:20 -- target/perf_adq.sh@54 -- # sleep 5 00:24:28.333 19:33:25 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:28.333 
19:33:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:28.333 19:33:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.333 19:33:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.333 19:33:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.333 19:33:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.333 19:33:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.333 19:33:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.333 19:33:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.333 19:33:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:28.333 19:33:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:28.333 19:33:25 -- common/autotest_common.sh@10 -- # set +x 00:24:28.333 19:33:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:28.333 19:33:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:28.333 19:33:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:28.333 19:33:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:28.333 19:33:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:28.333 19:33:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:28.333 19:33:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:28.333 19:33:25 -- nvmf/common.sh@294 -- # net_devs=() 00:24:28.333 19:33:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:28.333 19:33:25 -- nvmf/common.sh@295 -- # e810=() 00:24:28.333 19:33:25 -- nvmf/common.sh@295 -- # local -ga e810 00:24:28.333 19:33:25 -- nvmf/common.sh@296 -- # x722=() 00:24:28.333 19:33:25 -- nvmf/common.sh@296 -- # local -ga x722 00:24:28.333 19:33:25 -- nvmf/common.sh@297 -- # mlx=() 00:24:28.333 19:33:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:28.333 19:33:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.333 19:33:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:28.333 19:33:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:28.333 19:33:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:28.333 19:33:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.333 19:33:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.333 19:33:25 -- nvmf/common.sh@341 -- # [[ ice == 
unknown ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.333 19:33:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.333 19:33:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:28.333 19:33:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.333 19:33:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.333 19:33:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.333 19:33:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.333 19:33:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.333 19:33:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.333 19:33:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.333 19:33:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.333 19:33:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.333 19:33:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.333 19:33:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.333 19:33:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.333 19:33:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:28.333 19:33:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:28.333 19:33:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:28.333 19:33:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:28.333 19:33:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.333 19:33:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.333 19:33:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.333 19:33:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:28.333 19:33:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.333 19:33:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.333 19:33:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:28.334 19:33:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.334 19:33:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.334 19:33:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:28.334 19:33:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:28.334 19:33:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.334 19:33:25 -- nvmf/common.sh@250 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.334 19:33:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.334 19:33:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.334 19:33:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:28.334 19:33:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.334 19:33:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.334 19:33:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.334 19:33:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:28.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:24:28.334 00:24:28.334 --- 10.0.0.2 ping statistics --- 00:24:28.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.334 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:24:28.334 19:33:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:28.334 00:24:28.334 --- 10.0.0.1 ping statistics --- 00:24:28.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.334 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:28.334 19:33:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.334 19:33:25 -- nvmf/common.sh@410 -- # return 0 00:24:28.334 19:33:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.334 19:33:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.334 19:33:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:28.334 19:33:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:28.334 19:33:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.334 19:33:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:28.334 19:33:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:28.334 19:33:25 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:28.334 19:33:25 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:28.334 19:33:25 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:28.334 19:33:25 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:28.334 net.core.busy_poll = 1 00:24:28.334 19:33:25 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:28.334 net.core.busy_read = 1 00:24:28.334 19:33:25 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:28.334 19:33:25 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:28.334 19:33:26 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:28.334 19:33:26 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:28.334 19:33:26 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:28.334 19:33:26 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:28.334 19:33:26 -- 
nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:28.334 19:33:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 19:33:26 -- nvmf/common.sh@469 -- # nvmfpid=1270032 00:24:28.334 19:33:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:28.334 19:33:26 -- nvmf/common.sh@470 -- # waitforlisten 1270032 00:24:28.334 19:33:26 -- common/autotest_common.sh@829 -- # '[' -z 1270032 ']' 00:24:28.334 19:33:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.334 19:33:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.334 19:33:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.334 19:33:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 [2024-11-17 19:33:26.111623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:28.334 [2024-11-17 19:33:26.111720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.334 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.334 [2024-11-17 19:33:26.174281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.334 [2024-11-17 19:33:26.258482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.334 [2024-11-17 19:33:26.258632] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.334 [2024-11-17 19:33:26.258648] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.334 [2024-11-17 19:33:26.258660] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
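(For reference, the adq_configure_driver step traced above reduces to the following host-side setup; the cvl_0_0 name and the 2+2 queue split are specific to this run. mqprio divides the port into two traffic classes and the flower filter steers NVMe/TCP traffic on port 4420 into the second class in hardware, which is what lets the ADQ assertions later in the test hold.)

ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two TCs: priority 0 -> TC0, priority 1 -> TC1; TC0 = queues 0-1, TC1 = queues 2-3, offloaded in channel mode
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# pin NVMe/TCP (dst 10.0.0.2:4420) to TC1, hardware-only (skip_sw)
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # SPDK helper: align XPS with the device's queues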
00:24:28.334 [2024-11-17 19:33:26.258716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.334 [2024-11-17 19:33:26.258776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.334 [2024-11-17 19:33:26.258844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.334 [2024-11-17 19:33:26.258847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.334 19:33:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.334 19:33:26 -- common/autotest_common.sh@862 -- # return 0 00:24:28.334 19:33:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:28.334 19:33:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 19:33:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.334 19:33:26 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:28.334 19:33:26 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.334 19:33:26 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.334 19:33:26 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 [2024-11-17 19:33:26.466352] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.334 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.334 19:33:26 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 Malloc1 00:24:28.334 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.334 19:33:26 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.334 19:33:26 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 19:33:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.334 19:33:26 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.334 19:33:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.334 19:33:26 -- common/autotest_common.sh@10 -- # set +x 00:24:28.334 [2024-11-17 19:33:26.519233] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.334 19:33:26 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.335 19:33:26 -- target/perf_adq.sh@94 -- # perfpid=1270059 00:24:28.335 19:33:26 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:28.335 19:33:26 -- target/perf_adq.sh@95 -- # sleep 2 00:24:28.335 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.865 19:33:28 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:30.865 19:33:28 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:30.865 19:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.865 19:33:28 -- target/perf_adq.sh@97 -- # wc -l 00:24:30.865 19:33:28 -- common/autotest_common.sh@10 -- # set +x 00:24:30.865 19:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.865 19:33:28 -- target/perf_adq.sh@97 -- # count=2 00:24:30.865 19:33:28 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:30.865 19:33:28 -- target/perf_adq.sh@103 -- # wait 1270059 00:24:38.974 Initializing NVMe Controllers 00:24:38.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:38.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:38.975 Initialization complete. Launching workers. 00:24:38.975 ======================================================== 00:24:38.975 Latency(us) 00:24:38.975 Device Information : IOPS MiB/s Average min max 00:24:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6304.00 24.62 10152.75 1944.35 53028.56 00:24:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7761.70 30.32 8249.26 1308.68 53619.50 00:24:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6441.00 25.16 9940.97 1332.66 52476.38 00:24:38.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7374.50 28.81 8705.84 1646.73 53472.48 00:24:38.975 ======================================================== 00:24:38.975 Total : 27881.20 108.91 9191.22 1308.68 53619.50 00:24:38.975 00:24:38.975 19:33:36 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:38.975 19:33:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:38.975 19:33:36 -- nvmf/common.sh@116 -- # sync 00:24:38.975 19:33:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:38.975 19:33:36 -- nvmf/common.sh@119 -- # set +e 00:24:38.975 19:33:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:38.975 19:33:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:38.975 rmmod nvme_tcp 00:24:38.975 rmmod nvme_fabrics 00:24:38.975 rmmod nvme_keyring 00:24:38.975 19:33:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:38.975 19:33:36 -- nvmf/common.sh@123 -- # set -e 00:24:38.975 19:33:36 -- nvmf/common.sh@124 -- # return 0 00:24:38.975 19:33:36 -- nvmf/common.sh@477 -- # '[' -n 1270032 ']' 00:24:38.975 19:33:36 -- nvmf/common.sh@478 -- # killprocess 1270032 00:24:38.975 19:33:36 -- common/autotest_common.sh@936 -- # '[' -z 1270032 ']' 00:24:38.975 19:33:36 -- common/autotest_common.sh@940 
-- # kill -0 1270032 00:24:38.975 19:33:36 -- common/autotest_common.sh@941 -- # uname 00:24:38.975 19:33:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.975 19:33:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1270032 00:24:38.975 19:33:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:38.975 19:33:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:38.975 19:33:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1270032' 00:24:38.975 killing process with pid 1270032 00:24:38.975 19:33:36 -- common/autotest_common.sh@955 -- # kill 1270032 00:24:38.975 19:33:36 -- common/autotest_common.sh@960 -- # wait 1270032 00:24:38.975 19:33:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:38.975 19:33:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:38.975 19:33:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:38.975 19:33:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.975 19:33:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:38.975 19:33:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.975 19:33:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.975 19:33:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.260 19:33:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:42.260 19:33:40 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:42.260 00:24:42.260 real 0m45.998s 00:24:42.260 user 2m38.396s 00:24:42.260 sys 0m9.894s 00:24:42.260 19:33:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:42.260 19:33:40 -- common/autotest_common.sh@10 -- # set +x 00:24:42.260 ************************************ 00:24:42.260 END TEST nvmf_perf_adq 00:24:42.260 ************************************ 00:24:42.260 19:33:40 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:42.260 19:33:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:42.260 19:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:42.260 19:33:40 -- common/autotest_common.sh@10 -- # set +x 00:24:42.260 ************************************ 00:24:42.260 START TEST nvmf_shutdown 00:24:42.260 ************************************ 00:24:42.260 19:33:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:42.260 * Looking for test storage... 
00:24:42.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:42.260 19:33:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:42.260 19:33:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:42.260 19:33:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:42.260 19:33:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:42.260 19:33:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:42.260 19:33:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:42.260 19:33:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:42.260 19:33:40 -- scripts/common.sh@335 -- # IFS=.-: 00:24:42.260 19:33:40 -- scripts/common.sh@335 -- # read -ra ver1 00:24:42.260 19:33:40 -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.260 19:33:40 -- scripts/common.sh@336 -- # read -ra ver2 00:24:42.260 19:33:40 -- scripts/common.sh@337 -- # local 'op=<' 00:24:42.260 19:33:40 -- scripts/common.sh@339 -- # ver1_l=2 00:24:42.260 19:33:40 -- scripts/common.sh@340 -- # ver2_l=1 00:24:42.260 19:33:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:42.260 19:33:40 -- scripts/common.sh@343 -- # case "$op" in 00:24:42.260 19:33:40 -- scripts/common.sh@344 -- # : 1 00:24:42.260 19:33:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:42.260 19:33:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.260 19:33:40 -- scripts/common.sh@364 -- # decimal 1 00:24:42.260 19:33:40 -- scripts/common.sh@352 -- # local d=1 00:24:42.260 19:33:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.260 19:33:40 -- scripts/common.sh@354 -- # echo 1 00:24:42.260 19:33:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:42.260 19:33:40 -- scripts/common.sh@365 -- # decimal 2 00:24:42.260 19:33:40 -- scripts/common.sh@352 -- # local d=2 00:24:42.260 19:33:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.260 19:33:40 -- scripts/common.sh@354 -- # echo 2 00:24:42.260 19:33:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:42.260 19:33:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:42.260 19:33:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:42.260 19:33:40 -- scripts/common.sh@367 -- # return 0 00:24:42.260 19:33:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.260 19:33:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:42.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.260 --rc genhtml_branch_coverage=1 00:24:42.260 --rc genhtml_function_coverage=1 00:24:42.260 --rc genhtml_legend=1 00:24:42.260 --rc geninfo_all_blocks=1 00:24:42.260 --rc geninfo_unexecuted_blocks=1 00:24:42.260 00:24:42.260 ' 00:24:42.260 19:33:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:42.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.260 --rc genhtml_branch_coverage=1 00:24:42.260 --rc genhtml_function_coverage=1 00:24:42.260 --rc genhtml_legend=1 00:24:42.260 --rc geninfo_all_blocks=1 00:24:42.260 --rc geninfo_unexecuted_blocks=1 00:24:42.260 00:24:42.260 ' 00:24:42.260 19:33:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:42.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.260 --rc genhtml_branch_coverage=1 00:24:42.260 --rc genhtml_function_coverage=1 00:24:42.260 --rc genhtml_legend=1 00:24:42.260 --rc geninfo_all_blocks=1 00:24:42.260 --rc geninfo_unexecuted_blocks=1 00:24:42.260 00:24:42.260 
' 00:24:42.260 19:33:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:42.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.260 --rc genhtml_branch_coverage=1 00:24:42.260 --rc genhtml_function_coverage=1 00:24:42.260 --rc genhtml_legend=1 00:24:42.260 --rc geninfo_all_blocks=1 00:24:42.260 --rc geninfo_unexecuted_blocks=1 00:24:42.260 00:24:42.260 ' 00:24:42.260 19:33:40 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.260 19:33:40 -- nvmf/common.sh@7 -- # uname -s 00:24:42.260 19:33:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.260 19:33:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.260 19:33:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.260 19:33:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.260 19:33:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.260 19:33:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.260 19:33:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.260 19:33:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.260 19:33:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.260 19:33:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.260 19:33:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.260 19:33:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.260 19:33:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.260 19:33:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.260 19:33:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.261 19:33:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.261 19:33:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.261 19:33:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.261 19:33:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.261 19:33:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.261 19:33:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.261 19:33:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.261 19:33:40 -- paths/export.sh@5 -- # export PATH 00:24:42.261 19:33:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.261 19:33:40 -- nvmf/common.sh@46 -- # : 0 00:24:42.261 19:33:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:42.261 19:33:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:42.261 19:33:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:42.261 19:33:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.261 19:33:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.261 19:33:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:42.261 19:33:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:42.261 19:33:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:42.261 19:33:40 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:42.261 19:33:40 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.261 19:33:40 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:42.261 19:33:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:42.261 19:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:42.261 19:33:40 -- common/autotest_common.sh@10 -- # set +x 00:24:42.261 ************************************ 00:24:42.261 START TEST nvmf_shutdown_tc1 00:24:42.261 ************************************ 00:24:42.261 19:33:40 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:24:42.261 19:33:40 -- target/shutdown.sh@74 -- # starttarget 00:24:42.261 19:33:40 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:42.261 19:33:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:42.261 19:33:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.261 19:33:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:42.261 19:33:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:42.261 19:33:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:42.261 19:33:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.261 19:33:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.261 19:33:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.261 19:33:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:42.261 19:33:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:42.261 19:33:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:42.261 19:33:40 -- common/autotest_common.sh@10 -- # set +x 00:24:44.162 19:33:42 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:24:44.162 19:33:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:44.162 19:33:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:44.162 19:33:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:44.162 19:33:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:44.162 19:33:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:44.162 19:33:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:44.162 19:33:42 -- nvmf/common.sh@294 -- # net_devs=() 00:24:44.162 19:33:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:44.162 19:33:42 -- nvmf/common.sh@295 -- # e810=() 00:24:44.162 19:33:42 -- nvmf/common.sh@295 -- # local -ga e810 00:24:44.162 19:33:42 -- nvmf/common.sh@296 -- # x722=() 00:24:44.162 19:33:42 -- nvmf/common.sh@296 -- # local -ga x722 00:24:44.162 19:33:42 -- nvmf/common.sh@297 -- # mlx=() 00:24:44.162 19:33:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:44.162 19:33:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.162 19:33:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:44.162 19:33:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:44.162 19:33:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:44.162 19:33:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:44.162 19:33:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:44.162 19:33:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:44.162 19:33:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.162 19:33:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:44.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:44.163 19:33:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.163 19:33:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:44.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:44.163 19:33:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:44.163 19:33:42 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:44.163 19:33:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.163 19:33:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.163 19:33:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.163 19:33:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.163 19:33:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:44.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:44.163 19:33:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.163 19:33:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.163 19:33:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.163 19:33:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.163 19:33:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.163 19:33:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:44.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:44.163 19:33:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.163 19:33:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:44.163 19:33:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:44.163 19:33:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:44.163 19:33:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:44.163 19:33:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.163 19:33:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.163 19:33:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.163 19:33:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:44.163 19:33:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.163 19:33:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.163 19:33:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:44.163 19:33:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.163 19:33:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.163 19:33:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:44.163 19:33:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:44.163 19:33:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.163 19:33:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.421 19:33:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.421 19:33:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.421 19:33:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:44.421 19:33:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.421 19:33:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.421 19:33:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.421 19:33:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:44.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:44.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:24:44.421 00:24:44.421 --- 10.0.0.2 ping statistics --- 00:24:44.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.421 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:24:44.421 19:33:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:44.421 00:24:44.421 --- 10.0.0.1 ping statistics --- 00:24:44.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.421 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:44.421 19:33:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.421 19:33:42 -- nvmf/common.sh@410 -- # return 0 00:24:44.421 19:33:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:44.421 19:33:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.421 19:33:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:44.421 19:33:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:44.421 19:33:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.421 19:33:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:44.421 19:33:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:44.421 19:33:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:44.421 19:33:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:44.421 19:33:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.421 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:24:44.421 19:33:42 -- nvmf/common.sh@469 -- # nvmfpid=1273404 00:24:44.421 19:33:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:44.421 19:33:42 -- nvmf/common.sh@470 -- # waitforlisten 1273404 00:24:44.421 19:33:42 -- common/autotest_common.sh@829 -- # '[' -z 1273404 ']' 00:24:44.421 19:33:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.421 19:33:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.421 19:33:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.421 19:33:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.421 19:33:42 -- common/autotest_common.sh@10 -- # set +x 00:24:44.421 [2024-11-17 19:33:42.583768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:44.421 [2024-11-17 19:33:42.583856] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.421 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.421 [2024-11-17 19:33:42.656948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.679 [2024-11-17 19:33:42.750017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.679 [2024-11-17 19:33:42.750169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.679 [2024-11-17 19:33:42.750188] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:44.679 [2024-11-17 19:33:42.750203] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.679 [2024-11-17 19:33:42.750281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.679 [2024-11-17 19:33:42.750307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.679 [2024-11-17 19:33:42.750380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:44.679 [2024-11-17 19:33:42.750382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.610 19:33:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.611 19:33:43 -- common/autotest_common.sh@862 -- # return 0 00:24:45.611 19:33:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:45.611 19:33:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.611 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 19:33:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.611 19:33:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.611 19:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.611 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 [2024-11-17 19:33:43.630416] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.611 19:33:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.611 19:33:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:45.611 19:33:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:45.611 19:33:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.611 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 19:33:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.611 19:33:43 -- target/shutdown.sh@28 -- # cat 00:24:45.611 19:33:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:45.611 19:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.611 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 Malloc1 00:24:45.611 [2024-11-17 
19:33:43.724133] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.611 Malloc2 00:24:45.611 Malloc3 00:24:45.611 Malloc4 00:24:45.869 Malloc5 00:24:45.869 Malloc6 00:24:45.869 Malloc7 00:24:45.869 Malloc8 00:24:45.869 Malloc9 00:24:46.128 Malloc10 00:24:46.128 19:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.128 19:33:44 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:46.128 19:33:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.128 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:24:46.128 19:33:44 -- target/shutdown.sh@78 -- # perfpid=1273724 00:24:46.128 19:33:44 -- target/shutdown.sh@79 -- # waitforlisten 1273724 /var/tmp/bdevperf.sock 00:24:46.128 19:33:44 -- common/autotest_common.sh@829 -- # '[' -z 1273724 ']' 00:24:46.128 19:33:44 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:46.128 19:33:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.128 19:33:44 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:46.128 19:33:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.128 19:33:44 -- nvmf/common.sh@520 -- # config=() 00:24:46.128 19:33:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.128 19:33:44 -- nvmf/common.sh@520 -- # local subsystem config 00:24:46.128 19:33:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.128 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.128 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.128 { 00:24:46.128 "params": { 00:24:46.128 "name": "Nvme$subsystem", 00:24:46.128 "trtype": "$TEST_TRANSPORT", 00:24:46.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.128 "adrfam": "ipv4", 00:24:46.128 "trsvcid": "$NVMF_PORT", 00:24:46.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.128 "hdgst": ${hdgst:-false}, 00:24:46.128 "ddgst": ${ddgst:-false} 00:24:46.128 }, 00:24:46.128 "method": "bdev_nvme_attach_controller" 00:24:46.128 } 00:24:46.128 EOF 00:24:46.128 )") 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.128 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.128 { 00:24:46.128 "params": { 00:24:46.128 "name": "Nvme$subsystem", 00:24:46.128 "trtype": "$TEST_TRANSPORT", 00:24:46.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.128 "adrfam": "ipv4", 00:24:46.128 "trsvcid": "$NVMF_PORT", 00:24:46.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.128 "hdgst": ${hdgst:-false}, 00:24:46.128 "ddgst": ${ddgst:-false} 00:24:46.128 }, 00:24:46.128 "method": "bdev_nvme_attach_controller" 00:24:46.128 } 00:24:46.128 EOF 00:24:46.128 )") 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.128 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.128 { 00:24:46.128 "params": { 00:24:46.128 
"name": "Nvme$subsystem", 00:24:46.128 "trtype": "$TEST_TRANSPORT", 00:24:46.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.128 "adrfam": "ipv4", 00:24:46.128 "trsvcid": "$NVMF_PORT", 00:24:46.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.128 "hdgst": ${hdgst:-false}, 00:24:46.128 "ddgst": ${ddgst:-false} 00:24:46.128 }, 00:24:46.128 "method": "bdev_nvme_attach_controller" 00:24:46.128 } 00:24:46.128 EOF 00:24:46.128 )") 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.128 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.128 { 00:24:46.128 "params": { 00:24:46.128 "name": "Nvme$subsystem", 00:24:46.128 "trtype": "$TEST_TRANSPORT", 00:24:46.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.128 "adrfam": "ipv4", 00:24:46.128 "trsvcid": "$NVMF_PORT", 00:24:46.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.128 "hdgst": ${hdgst:-false}, 00:24:46.128 "ddgst": ${ddgst:-false} 00:24:46.128 }, 00:24:46.128 "method": "bdev_nvme_attach_controller" 00:24:46.128 } 00:24:46.128 EOF 00:24:46.128 )") 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.128 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.128 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.128 { 00:24:46.128 "params": { 00:24:46.128 "name": "Nvme$subsystem", 00:24:46.129 "trtype": "$TEST_TRANSPORT", 00:24:46.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "$NVMF_PORT", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.129 "hdgst": ${hdgst:-false}, 00:24:46.129 "ddgst": ${ddgst:-false} 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 } 00:24:46.129 EOF 00:24:46.129 )") 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.129 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.129 { 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme$subsystem", 00:24:46.129 "trtype": "$TEST_TRANSPORT", 00:24:46.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "$NVMF_PORT", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.129 "hdgst": ${hdgst:-false}, 00:24:46.129 "ddgst": ${ddgst:-false} 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 } 00:24:46.129 EOF 00:24:46.129 )") 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.129 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.129 { 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme$subsystem", 00:24:46.129 "trtype": "$TEST_TRANSPORT", 00:24:46.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "$NVMF_PORT", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.129 "hdgst": ${hdgst:-false}, 00:24:46.129 "ddgst": ${ddgst:-false} 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 } 00:24:46.129 EOF 00:24:46.129 )") 00:24:46.129 
19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.129 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.129 { 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme$subsystem", 00:24:46.129 "trtype": "$TEST_TRANSPORT", 00:24:46.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "$NVMF_PORT", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.129 "hdgst": ${hdgst:-false}, 00:24:46.129 "ddgst": ${ddgst:-false} 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 } 00:24:46.129 EOF 00:24:46.129 )") 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.129 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.129 { 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme$subsystem", 00:24:46.129 "trtype": "$TEST_TRANSPORT", 00:24:46.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "$NVMF_PORT", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.129 "hdgst": ${hdgst:-false}, 00:24:46.129 "ddgst": ${ddgst:-false} 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 } 00:24:46.129 EOF 00:24:46.129 )") 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.129 19:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.129 { 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme$subsystem", 00:24:46.129 "trtype": "$TEST_TRANSPORT", 00:24:46.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "$NVMF_PORT", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.129 "hdgst": ${hdgst:-false}, 00:24:46.129 "ddgst": ${ddgst:-false} 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 } 00:24:46.129 EOF 00:24:46.129 )") 00:24:46.129 19:33:44 -- nvmf/common.sh@542 -- # cat 00:24:46.129 19:33:44 -- nvmf/common.sh@544 -- # jq . 
00:24:46.129 19:33:44 -- nvmf/common.sh@545 -- # IFS=, 00:24:46.129 19:33:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme1", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme2", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme3", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme4", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme5", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme6", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme7", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme8", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": 
"bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme9", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 },{ 00:24:46.129 "params": { 00:24:46.129 "name": "Nvme10", 00:24:46.129 "trtype": "tcp", 00:24:46.129 "traddr": "10.0.0.2", 00:24:46.129 "adrfam": "ipv4", 00:24:46.129 "trsvcid": "4420", 00:24:46.129 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:46.129 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:46.129 "hdgst": false, 00:24:46.129 "ddgst": false 00:24:46.129 }, 00:24:46.129 "method": "bdev_nvme_attach_controller" 00:24:46.129 }' 00:24:46.129 [2024-11-17 19:33:44.220052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:46.130 [2024-11-17 19:33:44.220127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:46.130 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.130 [2024-11-17 19:33:44.284441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.130 [2024-11-17 19:33:44.369487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.028 19:33:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.028 19:33:45 -- common/autotest_common.sh@862 -- # return 0 00:24:48.028 19:33:45 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:48.028 19:33:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.028 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:24:48.028 19:33:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.028 19:33:45 -- target/shutdown.sh@83 -- # kill -9 1273724 00:24:48.028 19:33:45 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:48.028 19:33:45 -- target/shutdown.sh@87 -- # sleep 1 00:24:48.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1273724 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:48.961 19:33:46 -- target/shutdown.sh@88 -- # kill -0 1273404 00:24:48.961 19:33:46 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:48.961 19:33:46 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:48.961 19:33:46 -- nvmf/common.sh@520 -- # config=() 00:24:48.961 19:33:46 -- nvmf/common.sh@520 -- # local subsystem config 00:24:48.961 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.961 { 00:24:48.961 "params": { 00:24:48.961 "name": "Nvme$subsystem", 00:24:48.961 "trtype": "$TEST_TRANSPORT", 00:24:48.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.961 "adrfam": "ipv4", 00:24:48.961 "trsvcid": "$NVMF_PORT", 00:24:48.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.961 "hdgst": ${hdgst:-false}, 00:24:48.961 "ddgst": 
${ddgst:-false} 00:24:48.961 }, 00:24:48.961 "method": "bdev_nvme_attach_controller" 00:24:48.961 } 00:24:48.961 EOF 00:24:48.961 )") 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.961 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.961 { 00:24:48.961 "params": { 00:24:48.961 "name": "Nvme$subsystem", 00:24:48.961 "trtype": "$TEST_TRANSPORT", 00:24:48.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.961 "adrfam": "ipv4", 00:24:48.961 "trsvcid": "$NVMF_PORT", 00:24:48.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.961 "hdgst": ${hdgst:-false}, 00:24:48.961 "ddgst": ${ddgst:-false} 00:24:48.961 }, 00:24:48.961 "method": "bdev_nvme_attach_controller" 00:24:48.961 } 00:24:48.961 EOF 00:24:48.961 )") 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.961 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.961 { 00:24:48.961 "params": { 00:24:48.961 "name": "Nvme$subsystem", 00:24:48.961 "trtype": "$TEST_TRANSPORT", 00:24:48.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.961 "adrfam": "ipv4", 00:24:48.961 "trsvcid": "$NVMF_PORT", 00:24:48.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.961 "hdgst": ${hdgst:-false}, 00:24:48.961 "ddgst": ${ddgst:-false} 00:24:48.961 }, 00:24:48.961 "method": "bdev_nvme_attach_controller" 00:24:48.961 } 00:24:48.961 EOF 00:24:48.961 )") 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.961 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.961 { 00:24:48.961 "params": { 00:24:48.961 "name": "Nvme$subsystem", 00:24:48.961 "trtype": "$TEST_TRANSPORT", 00:24:48.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.961 "adrfam": "ipv4", 00:24:48.961 "trsvcid": "$NVMF_PORT", 00:24:48.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.961 "hdgst": ${hdgst:-false}, 00:24:48.961 "ddgst": ${ddgst:-false} 00:24:48.961 }, 00:24:48.961 "method": "bdev_nvme_attach_controller" 00:24:48.961 } 00:24:48.961 EOF 00:24:48.961 )") 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.961 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.961 { 00:24:48.961 "params": { 00:24:48.961 "name": "Nvme$subsystem", 00:24:48.961 "trtype": "$TEST_TRANSPORT", 00:24:48.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.961 "adrfam": "ipv4", 00:24:48.961 "trsvcid": "$NVMF_PORT", 00:24:48.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.961 "hdgst": ${hdgst:-false}, 00:24:48.961 "ddgst": ${ddgst:-false} 00:24:48.961 }, 00:24:48.961 "method": "bdev_nvme_attach_controller" 00:24:48.961 } 00:24:48.961 EOF 00:24:48.961 )") 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.961 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.961 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.961 { 00:24:48.961 "params": { 00:24:48.961 "name": "Nvme$subsystem", 00:24:48.961 "trtype": "$TEST_TRANSPORT", 00:24:48.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.961 
"adrfam": "ipv4", 00:24:48.962 "trsvcid": "$NVMF_PORT", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.962 "hdgst": ${hdgst:-false}, 00:24:48.962 "ddgst": ${ddgst:-false} 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 } 00:24:48.962 EOF 00:24:48.962 )") 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.962 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.962 { 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme$subsystem", 00:24:48.962 "trtype": "$TEST_TRANSPORT", 00:24:48.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "$NVMF_PORT", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.962 "hdgst": ${hdgst:-false}, 00:24:48.962 "ddgst": ${ddgst:-false} 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 } 00:24:48.962 EOF 00:24:48.962 )") 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.962 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.962 { 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme$subsystem", 00:24:48.962 "trtype": "$TEST_TRANSPORT", 00:24:48.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "$NVMF_PORT", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.962 "hdgst": ${hdgst:-false}, 00:24:48.962 "ddgst": ${ddgst:-false} 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 } 00:24:48.962 EOF 00:24:48.962 )") 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.962 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.962 { 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme$subsystem", 00:24:48.962 "trtype": "$TEST_TRANSPORT", 00:24:48.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "$NVMF_PORT", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.962 "hdgst": ${hdgst:-false}, 00:24:48.962 "ddgst": ${ddgst:-false} 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 } 00:24:48.962 EOF 00:24:48.962 )") 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.962 19:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.962 { 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme$subsystem", 00:24:48.962 "trtype": "$TEST_TRANSPORT", 00:24:48.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "$NVMF_PORT", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.962 "hdgst": ${hdgst:-false}, 00:24:48.962 "ddgst": ${ddgst:-false} 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 } 00:24:48.962 EOF 00:24:48.962 )") 00:24:48.962 19:33:46 -- nvmf/common.sh@542 -- # cat 00:24:48.962 19:33:46 -- nvmf/common.sh@544 -- # jq . 
00:24:48.962 19:33:46 -- nvmf/common.sh@545 -- # IFS=, 00:24:48.962 19:33:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme1", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme2", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme3", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme4", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme5", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme6", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme7", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme8", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": 
"bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme9", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 },{ 00:24:48.962 "params": { 00:24:48.962 "name": "Nvme10", 00:24:48.962 "trtype": "tcp", 00:24:48.962 "traddr": "10.0.0.2", 00:24:48.962 "adrfam": "ipv4", 00:24:48.962 "trsvcid": "4420", 00:24:48.962 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:48.962 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:48.962 "hdgst": false, 00:24:48.962 "ddgst": false 00:24:48.962 }, 00:24:48.962 "method": "bdev_nvme_attach_controller" 00:24:48.962 }' 00:24:48.962 [2024-11-17 19:33:46.987152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:48.962 [2024-11-17 19:33:46.987238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274032 ] 00:24:48.962 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.962 [2024-11-17 19:33:47.052608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.962 [2024-11-17 19:33:47.140060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.335 Running I/O for 1 seconds... 00:24:51.268 00:24:51.268 Latency(us) 00:24:51.268 [2024-11-17T18:33:49.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme1n1 : 1.09 399.04 24.94 0.00 0.00 157532.15 18835.53 139810.13 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme2n1 : 1.08 402.87 25.18 0.00 0.00 155117.86 17185.00 122722.23 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme3n1 : 1.08 401.04 25.06 0.00 0.00 154583.19 18252.99 118061.89 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme4n1 : 1.09 397.44 24.84 0.00 0.00 154693.71 20291.89 115731.72 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme5n1 : 1.09 396.88 24.80 0.00 0.00 154031.75 18447.17 118838.61 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme6n1 : 1.10 395.80 24.74 0.00 0.00 153480.52 16505.36 119615.34 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme7n1 : 1.10 394.39 24.65 0.00 0.00 152996.64 15922.82 122722.23 
00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme8n1 : 1.11 396.33 24.77 0.00 0.00 152528.72 5655.51 127382.57 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme9n1 : 1.10 393.05 24.57 0.00 0.00 151682.12 14854.83 125052.40 00:24:51.268 [2024-11-17T18:33:49.535Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:51.268 Verification LBA range: start 0x0 length 0x400 00:24:51.268 Nvme10n1 : 1.11 390.92 24.43 0.00 0.00 151743.58 8398.32 128936.01 00:24:51.268 [2024-11-17T18:33:49.535Z] =================================================================================================================== 00:24:51.268 [2024-11-17T18:33:49.535Z] Total : 3967.76 247.98 0.00 0.00 153837.52 5655.51 139810.13 00:24:51.526 19:33:49 -- target/shutdown.sh@93 -- # stoptarget 00:24:51.526 19:33:49 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:51.526 19:33:49 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:51.526 19:33:49 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:51.526 19:33:49 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:51.526 19:33:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:51.526 19:33:49 -- nvmf/common.sh@116 -- # sync 00:24:51.526 19:33:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:51.526 19:33:49 -- nvmf/common.sh@119 -- # set +e 00:24:51.526 19:33:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:51.526 19:33:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:51.526 rmmod nvme_tcp 00:24:51.526 rmmod nvme_fabrics 00:24:51.526 rmmod nvme_keyring 00:24:51.784 19:33:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:51.784 19:33:49 -- nvmf/common.sh@123 -- # set -e 00:24:51.784 19:33:49 -- nvmf/common.sh@124 -- # return 0 00:24:51.784 19:33:49 -- nvmf/common.sh@477 -- # '[' -n 1273404 ']' 00:24:51.784 19:33:49 -- nvmf/common.sh@478 -- # killprocess 1273404 00:24:51.784 19:33:49 -- common/autotest_common.sh@936 -- # '[' -z 1273404 ']' 00:24:51.784 19:33:49 -- common/autotest_common.sh@940 -- # kill -0 1273404 00:24:51.784 19:33:49 -- common/autotest_common.sh@941 -- # uname 00:24:51.784 19:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:51.784 19:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1273404 00:24:51.784 19:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:51.784 19:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:51.784 19:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1273404' 00:24:51.784 killing process with pid 1273404 00:24:51.784 19:33:49 -- common/autotest_common.sh@955 -- # kill 1273404 00:24:51.784 19:33:49 -- common/autotest_common.sh@960 -- # wait 1273404 00:24:52.351 19:33:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:52.351 19:33:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:52.351 19:33:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:52.351 19:33:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.351 19:33:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:52.351 19:33:50 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.351 19:33:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.351 19:33:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.252 19:33:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:54.252 00:24:54.252 real 0m12.167s 00:24:54.252 user 0m35.057s 00:24:54.252 sys 0m3.317s 00:24:54.252 19:33:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:54.252 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.252 ************************************ 00:24:54.252 END TEST nvmf_shutdown_tc1 00:24:54.252 ************************************ 00:24:54.252 19:33:52 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:54.252 19:33:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:54.252 19:33:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:54.252 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.252 ************************************ 00:24:54.252 START TEST nvmf_shutdown_tc2 00:24:54.252 ************************************ 00:24:54.252 19:33:52 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:24:54.252 19:33:52 -- target/shutdown.sh@98 -- # starttarget 00:24:54.252 19:33:52 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:54.252 19:33:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:54.252 19:33:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.252 19:33:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:54.252 19:33:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:54.252 19:33:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:54.252 19:33:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.252 19:33:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.252 19:33:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.252 19:33:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:54.252 19:33:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:54.252 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.252 19:33:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:54.252 19:33:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:54.252 19:33:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:54.252 19:33:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:54.252 19:33:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:54.252 19:33:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:54.252 19:33:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:54.252 19:33:52 -- nvmf/common.sh@294 -- # net_devs=() 00:24:54.252 19:33:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:54.252 19:33:52 -- nvmf/common.sh@295 -- # e810=() 00:24:54.252 19:33:52 -- nvmf/common.sh@295 -- # local -ga e810 00:24:54.252 19:33:52 -- nvmf/common.sh@296 -- # x722=() 00:24:54.252 19:33:52 -- nvmf/common.sh@296 -- # local -ga x722 00:24:54.252 19:33:52 -- nvmf/common.sh@297 -- # mlx=() 00:24:54.252 19:33:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:54.252 19:33:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.252 19:33:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:54.252 19:33:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:54.252 19:33:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:54.252 19:33:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.252 19:33:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.252 19:33:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:54.252 19:33:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.252 19:33:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:54.252 19:33:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.252 19:33:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.252 19:33:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.252 19:33:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.252 19:33:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.252 19:33:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.252 19:33:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:54.252 19:33:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.252 19:33:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:54.252 19:33:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.252 19:33:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.252 19:33:52 -- 
nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.252 19:33:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:54.252 19:33:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:54.252 19:33:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:54.252 19:33:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:54.252 19:33:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.252 19:33:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.252 19:33:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.252 19:33:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:54.252 19:33:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.252 19:33:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.252 19:33:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:54.252 19:33:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.252 19:33:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.252 19:33:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:54.252 19:33:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:54.252 19:33:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.252 19:33:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.511 19:33:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.511 19:33:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.511 19:33:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:54.511 19:33:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.511 19:33:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.511 19:33:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.511 19:33:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:54.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:54.511 00:24:54.511 --- 10.0.0.2 ping statistics --- 00:24:54.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.511 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:54.511 19:33:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:24:54.511 00:24:54.511 --- 10.0.0.1 ping statistics --- 00:24:54.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.511 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:54.511 19:33:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.511 19:33:52 -- nvmf/common.sh@410 -- # return 0 00:24:54.511 19:33:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:54.511 19:33:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.511 19:33:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:54.511 19:33:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:54.511 19:33:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.511 19:33:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:54.511 19:33:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:54.511 19:33:52 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:54.511 19:33:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:54.511 19:33:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.511 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.511 19:33:52 -- nvmf/common.sh@469 -- # nvmfpid=1274811 00:24:54.511 19:33:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:54.511 19:33:52 -- nvmf/common.sh@470 -- # waitforlisten 1274811 00:24:54.511 19:33:52 -- common/autotest_common.sh@829 -- # '[' -z 1274811 ']' 00:24:54.511 19:33:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.511 19:33:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.511 19:33:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.511 19:33:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.511 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.769 [2024-11-17 19:33:52.785423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:54.769 [2024-11-17 19:33:52.785502] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.769 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.769 [2024-11-17 19:33:52.853254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.769 [2024-11-17 19:33:52.947542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:54.769 [2024-11-17 19:33:52.947680] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.769 [2024-11-17 19:33:52.947698] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.769 [2024-11-17 19:33:52.947710] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.769 [2024-11-17 19:33:52.947762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.769 [2024-11-17 19:33:52.947821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.769 [2024-11-17 19:33:52.947888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.769 [2024-11-17 19:33:52.947891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.703 19:33:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.703 19:33:53 -- common/autotest_common.sh@862 -- # return 0 00:24:55.703 19:33:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:55.703 19:33:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.703 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:24:55.703 19:33:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.703 19:33:53 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.703 19:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.703 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:24:55.703 [2024-11-17 19:33:53.826523] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.703 19:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.703 19:33:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:55.703 19:33:53 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:55.703 19:33:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.703 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:24:55.703 19:33:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.703 19:33:53 -- target/shutdown.sh@28 -- # cat 00:24:55.703 19:33:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:55.703 19:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.703 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:24:55.703 Malloc1 00:24:55.703 [2024-11-17 19:33:53.916286] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.703 Malloc2 
00:24:55.962 Malloc3 00:24:55.962 Malloc4 00:24:55.962 Malloc5 00:24:55.962 Malloc6 00:24:55.962 Malloc7 00:24:56.221 Malloc8 00:24:56.221 Malloc9 00:24:56.221 Malloc10 00:24:56.221 19:33:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.221 19:33:54 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:56.221 19:33:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.221 19:33:54 -- common/autotest_common.sh@10 -- # set +x 00:24:56.221 19:33:54 -- target/shutdown.sh@102 -- # perfpid=1275126 00:24:56.221 19:33:54 -- target/shutdown.sh@103 -- # waitforlisten 1275126 /var/tmp/bdevperf.sock 00:24:56.221 19:33:54 -- common/autotest_common.sh@829 -- # '[' -z 1275126 ']' 00:24:56.221 19:33:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.221 19:33:54 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:56.221 19:33:54 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.221 19:33:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.221 19:33:54 -- nvmf/common.sh@520 -- # config=() 00:24:56.221 19:33:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.221 19:33:54 -- nvmf/common.sh@520 -- # local subsystem config 00:24:56.221 19:33:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.221 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.221 19:33:54 -- common/autotest_common.sh@10 -- # set +x 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.221 { 00:24:56.221 "params": { 00:24:56.221 "name": "Nvme$subsystem", 00:24:56.221 "trtype": "$TEST_TRANSPORT", 00:24:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.221 "adrfam": "ipv4", 00:24:56.221 "trsvcid": "$NVMF_PORT", 00:24:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.221 "hdgst": ${hdgst:-false}, 00:24:56.221 "ddgst": ${ddgst:-false} 00:24:56.221 }, 00:24:56.221 "method": "bdev_nvme_attach_controller" 00:24:56.221 } 00:24:56.221 EOF 00:24:56.221 )") 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.221 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.221 { 00:24:56.221 "params": { 00:24:56.221 "name": "Nvme$subsystem", 00:24:56.221 "trtype": "$TEST_TRANSPORT", 00:24:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.221 "adrfam": "ipv4", 00:24:56.221 "trsvcid": "$NVMF_PORT", 00:24:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.221 "hdgst": ${hdgst:-false}, 00:24:56.221 "ddgst": ${ddgst:-false} 00:24:56.221 }, 00:24:56.221 "method": "bdev_nvme_attach_controller" 00:24:56.221 } 00:24:56.221 EOF 00:24:56.221 )") 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.221 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.221 { 00:24:56.221 "params": { 00:24:56.221 "name": "Nvme$subsystem", 00:24:56.221 "trtype": "$TEST_TRANSPORT", 00:24:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:24:56.221 "adrfam": "ipv4", 00:24:56.221 "trsvcid": "$NVMF_PORT", 00:24:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.221 "hdgst": ${hdgst:-false}, 00:24:56.221 "ddgst": ${ddgst:-false} 00:24:56.221 }, 00:24:56.221 "method": "bdev_nvme_attach_controller" 00:24:56.221 } 00:24:56.221 EOF 00:24:56.221 )") 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.221 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.221 { 00:24:56.221 "params": { 00:24:56.221 "name": "Nvme$subsystem", 00:24:56.221 "trtype": "$TEST_TRANSPORT", 00:24:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.221 "adrfam": "ipv4", 00:24:56.221 "trsvcid": "$NVMF_PORT", 00:24:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.221 "hdgst": ${hdgst:-false}, 00:24:56.221 "ddgst": ${ddgst:-false} 00:24:56.221 }, 00:24:56.221 "method": "bdev_nvme_attach_controller" 00:24:56.221 } 00:24:56.221 EOF 00:24:56.221 )") 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.221 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.221 { 00:24:56.221 "params": { 00:24:56.221 "name": "Nvme$subsystem", 00:24:56.221 "trtype": "$TEST_TRANSPORT", 00:24:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.221 "adrfam": "ipv4", 00:24:56.221 "trsvcid": "$NVMF_PORT", 00:24:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.221 "hdgst": ${hdgst:-false}, 00:24:56.221 "ddgst": ${ddgst:-false} 00:24:56.221 }, 00:24:56.221 "method": "bdev_nvme_attach_controller" 00:24:56.221 } 00:24:56.221 EOF 00:24:56.221 )") 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.221 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.221 { 00:24:56.221 "params": { 00:24:56.221 "name": "Nvme$subsystem", 00:24:56.221 "trtype": "$TEST_TRANSPORT", 00:24:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.221 "adrfam": "ipv4", 00:24:56.221 "trsvcid": "$NVMF_PORT", 00:24:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.221 "hdgst": ${hdgst:-false}, 00:24:56.221 "ddgst": ${ddgst:-false} 00:24:56.221 }, 00:24:56.221 "method": "bdev_nvme_attach_controller" 00:24:56.221 } 00:24:56.221 EOF 00:24:56.221 )") 00:24:56.221 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.222 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.222 { 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme$subsystem", 00:24:56.222 "trtype": "$TEST_TRANSPORT", 00:24:56.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "$NVMF_PORT", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.222 "hdgst": ${hdgst:-false}, 00:24:56.222 "ddgst": ${ddgst:-false} 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 } 00:24:56.222 EOF 00:24:56.222 )") 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.222 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.222 { 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme$subsystem", 00:24:56.222 "trtype": "$TEST_TRANSPORT", 00:24:56.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "$NVMF_PORT", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.222 "hdgst": ${hdgst:-false}, 00:24:56.222 "ddgst": ${ddgst:-false} 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 } 00:24:56.222 EOF 00:24:56.222 )") 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.222 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.222 { 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme$subsystem", 00:24:56.222 "trtype": "$TEST_TRANSPORT", 00:24:56.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "$NVMF_PORT", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.222 "hdgst": ${hdgst:-false}, 00:24:56.222 "ddgst": ${ddgst:-false} 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 } 00:24:56.222 EOF 00:24:56.222 )") 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.222 19:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.222 { 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme$subsystem", 00:24:56.222 "trtype": "$TEST_TRANSPORT", 00:24:56.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "$NVMF_PORT", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.222 "hdgst": ${hdgst:-false}, 00:24:56.222 "ddgst": ${ddgst:-false} 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 } 00:24:56.222 EOF 00:24:56.222 )") 00:24:56.222 19:33:54 -- nvmf/common.sh@542 -- # cat 00:24:56.222 19:33:54 -- nvmf/common.sh@544 -- # jq . 
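The gen_nvmf_target_json loop being traced here emits one bdev_nvme_attach_controller parameter block per subsystem id (1 through 10) and joins them with commas before the result is pretty-printed with jq. The sketch below is a simplified stand-in written with printf instead of the heredocs in nvmf/common.sh, purely to keep it short; the real helper also embeds these blocks in the larger JSON document that bdevperf consumes via --json, which is not visible in this trace.

# Emit one bdev_nvme_attach_controller block per subsystem id and validate
# the joined result with jq. Target address/port mirror this run.
gen_attach_blocks() {
  local subsystem config=()
  local ip=10.0.0.2 port=4420 transport=tcp
  for subsystem in "${@:-1}"; do
    config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
      "$subsystem" "$transport" "$ip" "$port" "$subsystem" "$subsystem")")
  done
  local IFS=,
  printf '[%s]\n' "${config[*]}" | jq .   # [] wrapper added here only so jq sees valid JSON
}

gen_attach_blocks 1 2 3 4 5 6 7 8 9 10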
00:24:56.222 19:33:54 -- nvmf/common.sh@545 -- # IFS=, 00:24:56.222 19:33:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme1", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme2", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme3", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme4", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme5", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme6", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme7", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme8", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": 
"bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme9", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 },{ 00:24:56.222 "params": { 00:24:56.222 "name": "Nvme10", 00:24:56.222 "trtype": "tcp", 00:24:56.222 "traddr": "10.0.0.2", 00:24:56.222 "adrfam": "ipv4", 00:24:56.222 "trsvcid": "4420", 00:24:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:56.222 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:56.222 "hdgst": false, 00:24:56.222 "ddgst": false 00:24:56.222 }, 00:24:56.222 "method": "bdev_nvme_attach_controller" 00:24:56.222 }' 00:24:56.222 [2024-11-17 19:33:54.429292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:56.222 [2024-11-17 19:33:54.429363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275126 ] 00:24:56.222 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.481 [2024-11-17 19:33:54.492363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.481 [2024-11-17 19:33:54.577183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.380 Running I/O for 10 seconds... 00:24:58.947 19:33:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.947 19:33:56 -- common/autotest_common.sh@862 -- # return 0 00:24:58.947 19:33:56 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:58.947 19:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.947 19:33:56 -- common/autotest_common.sh@10 -- # set +x 00:24:58.947 19:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.947 19:33:56 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:58.947 19:33:56 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:58.947 19:33:56 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:58.947 19:33:56 -- target/shutdown.sh@57 -- # local ret=1 00:24:58.947 19:33:56 -- target/shutdown.sh@58 -- # local i 00:24:58.947 19:33:56 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:58.947 19:33:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:58.947 19:33:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:58.947 19:33:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:58.947 19:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.947 19:33:56 -- common/autotest_common.sh@10 -- # set +x 00:24:58.947 19:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.947 19:33:56 -- target/shutdown.sh@60 -- # read_io_count=211 00:24:58.947 19:33:56 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:24:58.947 19:33:56 -- target/shutdown.sh@64 -- # ret=0 00:24:58.947 19:33:56 -- target/shutdown.sh@65 -- # break 00:24:58.947 19:33:56 -- target/shutdown.sh@69 -- # return 0 00:24:58.947 19:33:56 -- target/shutdown.sh@109 -- # killprocess 1275126 00:24:58.947 19:33:56 -- common/autotest_common.sh@936 -- # '[' -z 1275126 ']' 00:24:58.947 19:33:56 -- common/autotest_common.sh@940 -- # kill -0 1275126 
00:24:58.947 19:33:56 -- common/autotest_common.sh@941 -- # uname 00:24:58.947 19:33:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:58.947 19:33:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1275126 00:24:58.947 19:33:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:58.947 19:33:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:58.947 19:33:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1275126' 00:24:58.947 killing process with pid 1275126 00:24:58.947 19:33:57 -- common/autotest_common.sh@955 -- # kill 1275126 00:24:58.947 19:33:57 -- common/autotest_common.sh@960 -- # wait 1275126 00:24:58.947 Received shutdown signal, test time was about 0.674495 seconds 00:24:58.947 00:24:58.947 Latency(us) 00:24:58.947 [2024-11-17T18:33:57.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme1n1 : 0.65 415.62 25.98 0.00 0.00 148893.26 19709.35 116508.44 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme2n1 : 0.66 413.07 25.82 0.00 0.00 148453.85 23107.51 146023.92 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme3n1 : 0.66 417.48 26.09 0.00 0.00 144595.55 9417.77 134373.07 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme4n1 : 0.66 412.21 25.76 0.00 0.00 144467.26 27767.85 114955.00 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme5n1 : 0.66 410.37 25.65 0.00 0.00 143158.68 29321.29 114955.00 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme6n1 : 0.67 408.95 25.56 0.00 0.00 142356.97 26991.12 117285.17 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme7n1 : 0.67 406.61 25.41 0.00 0.00 141393.21 27379.48 117285.17 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme8n1 : 0.67 405.39 25.34 0.00 0.00 140260.60 25826.04 120392.06 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme9n1 : 0.67 403.74 25.23 0.00 0.00 142264.88 6189.51 125052.40 00:24:58.947 [2024-11-17T18:33:57.214Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:58.947 Verification LBA range: start 0x0 length 0x400 00:24:58.947 Nvme10n1 : 0.65 353.18 22.07 0.00 0.00 156088.10 25437.68 122722.23 00:24:58.947 [2024-11-17T18:33:57.214Z] 
=================================================================================================================== 00:24:58.947 [2024-11-17T18:33:57.214Z] Total : 4046.64 252.91 0.00 0.00 145013.79 6189.51 146023.92 00:24:59.206 19:33:57 -- target/shutdown.sh@112 -- # sleep 1 00:25:00.139 19:33:58 -- target/shutdown.sh@113 -- # kill -0 1274811 00:25:00.139 19:33:58 -- target/shutdown.sh@115 -- # stoptarget 00:25:00.139 19:33:58 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:00.139 19:33:58 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:00.139 19:33:58 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:00.139 19:33:58 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:00.139 19:33:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.139 19:33:58 -- nvmf/common.sh@116 -- # sync 00:25:00.139 19:33:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:00.139 19:33:58 -- nvmf/common.sh@119 -- # set +e 00:25:00.139 19:33:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:00.139 19:33:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:00.139 rmmod nvme_tcp 00:25:00.139 rmmod nvme_fabrics 00:25:00.139 rmmod nvme_keyring 00:25:00.139 19:33:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:00.139 19:33:58 -- nvmf/common.sh@123 -- # set -e 00:25:00.139 19:33:58 -- nvmf/common.sh@124 -- # return 0 00:25:00.139 19:33:58 -- nvmf/common.sh@477 -- # '[' -n 1274811 ']' 00:25:00.139 19:33:58 -- nvmf/common.sh@478 -- # killprocess 1274811 00:25:00.139 19:33:58 -- common/autotest_common.sh@936 -- # '[' -z 1274811 ']' 00:25:00.139 19:33:58 -- common/autotest_common.sh@940 -- # kill -0 1274811 00:25:00.397 19:33:58 -- common/autotest_common.sh@941 -- # uname 00:25:00.397 19:33:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.397 19:33:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1274811 00:25:00.397 19:33:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:00.397 19:33:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:00.397 19:33:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1274811' 00:25:00.397 killing process with pid 1274811 00:25:00.397 19:33:58 -- common/autotest_common.sh@955 -- # kill 1274811 00:25:00.397 19:33:58 -- common/autotest_common.sh@960 -- # wait 1274811 00:25:00.656 19:33:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:00.656 19:33:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:00.656 19:33:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:00.656 19:33:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.656 19:33:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:00.656 19:33:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.656 19:33:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.656 19:33:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.186 19:34:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:03.186 00:25:03.186 real 0m8.510s 00:25:03.186 user 0m26.824s 00:25:03.186 sys 0m1.539s 00:25:03.186 19:34:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:03.186 19:34:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.186 ************************************ 00:25:03.186 END TEST nvmf_shutdown_tc2 00:25:03.186 
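tc2 ends by tearing everything back down: bdevperf has already been stopped, the nvmf target (pid 1274811 in this run) is killed and waited on, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, and the namespace plumbing is undone before tc3 re-creates it. A condensed sketch of that teardown; the ip netns delete step is an assumption about what _remove_spdk_ns does, since its output is redirected away in the trace:

# Teardown mirrored from the trace above (pid and interface names from this run).
kill 1274811 && wait 1274811     # stop nvmf_tgt; wait works because it is a child of the test shell
modprobe -v -r nvme-tcp          # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics      # no-op if the previous line already removed it
ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns (hidden in the trace)
ip -4 addr flush cvl_0_1         # drop 10.0.0.1/24 from the initiator port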
************************************ 00:25:03.186 19:34:00 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:03.186 19:34:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:03.186 19:34:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:03.186 19:34:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.186 ************************************ 00:25:03.186 START TEST nvmf_shutdown_tc3 00:25:03.186 ************************************ 00:25:03.186 19:34:00 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:25:03.186 19:34:00 -- target/shutdown.sh@120 -- # starttarget 00:25:03.186 19:34:00 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:03.186 19:34:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:03.186 19:34:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.186 19:34:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:03.186 19:34:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:03.186 19:34:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:03.186 19:34:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.186 19:34:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.186 19:34:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.186 19:34:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:03.186 19:34:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:03.186 19:34:00 -- common/autotest_common.sh@10 -- # set +x 00:25:03.186 19:34:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.186 19:34:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:03.186 19:34:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:03.186 19:34:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:03.186 19:34:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:03.186 19:34:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:03.186 19:34:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:03.186 19:34:00 -- nvmf/common.sh@294 -- # net_devs=() 00:25:03.186 19:34:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:03.186 19:34:00 -- nvmf/common.sh@295 -- # e810=() 00:25:03.186 19:34:00 -- nvmf/common.sh@295 -- # local -ga e810 00:25:03.186 19:34:00 -- nvmf/common.sh@296 -- # x722=() 00:25:03.186 19:34:00 -- nvmf/common.sh@296 -- # local -ga x722 00:25:03.186 19:34:00 -- nvmf/common.sh@297 -- # mlx=() 00:25:03.186 19:34:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:03.186 19:34:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.186 19:34:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:03.186 19:34:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:03.186 19:34:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:03.186 19:34:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.186 19:34:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:03.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:03.186 19:34:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.186 19:34:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:03.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:03.186 19:34:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:03.186 19:34:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.186 19:34:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.186 19:34:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.186 19:34:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.186 19:34:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:03.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:03.186 19:34:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.186 19:34:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.186 19:34:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.186 19:34:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.186 19:34:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.186 19:34:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:03.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:03.186 19:34:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.186 19:34:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:03.186 19:34:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:03.186 19:34:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:03.186 19:34:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:03.186 19:34:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.186 19:34:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.186 19:34:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.186 19:34:00 -- 
nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:03.186 19:34:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.186 19:34:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.186 19:34:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:03.186 19:34:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.186 19:34:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.186 19:34:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:03.186 19:34:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:03.186 19:34:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.186 19:34:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.186 19:34:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.186 19:34:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.186 19:34:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:03.186 19:34:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.186 19:34:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.186 19:34:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.187 19:34:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:03.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:25:03.187 00:25:03.187 --- 10.0.0.2 ping statistics --- 00:25:03.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.187 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:25:03.187 19:34:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:25:03.187 00:25:03.187 --- 10.0.0.1 ping statistics --- 00:25:03.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.187 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:03.187 19:34:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.187 19:34:01 -- nvmf/common.sh@410 -- # return 0 00:25:03.187 19:34:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:03.187 19:34:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.187 19:34:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:03.187 19:34:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:03.187 19:34:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.187 19:34:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:03.187 19:34:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:03.187 19:34:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:03.187 19:34:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:03.187 19:34:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:03.187 19:34:01 -- common/autotest_common.sh@10 -- # set +x 00:25:03.187 19:34:01 -- nvmf/common.sh@469 -- # nvmfpid=1276060 00:25:03.187 19:34:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:03.187 19:34:01 -- nvmf/common.sh@470 -- # waitforlisten 1276060 00:25:03.187 19:34:01 -- common/autotest_common.sh@829 -- # '[' -z 1276060 ']' 00:25:03.187 19:34:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.187 19:34:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.187 19:34:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.187 19:34:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.187 19:34:01 -- common/autotest_common.sh@10 -- # set +x 00:25:03.187 [2024-11-17 19:34:01.224144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:03.187 [2024-11-17 19:34:01.224230] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.187 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.187 [2024-11-17 19:34:01.291824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.187 [2024-11-17 19:34:01.380838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:03.187 [2024-11-17 19:34:01.380996] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.187 [2024-11-17 19:34:01.381015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.187 [2024-11-17 19:34:01.381029] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
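tc3 runs nvmftestinit and nvmfappstart again; the tripled "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt command line is just NVMF_TARGET_NS_CMD being prepended to NVMF_APP on every nvmftestinit call, and re-entering the same namespace is a no-op. Stripped of the harness, the launch amounts to the sketch below; the readiness probe is a simplification of waitforlisten (rpc_get_methods is a standard SPDK RPC used here as an assumed, reasonable probe), and paths are relative to an SPDK build tree rather than the absolute Jenkins paths in the trace:

# Launch nvmf_tgt for tc3 inside the target namespace: core mask 0x1E (cores 1-4),
# all tracepoint groups enabled (-e 0xFFFF), shared-memory id 0 (-i 0), as traced above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Crude stand-in for waitforlisten: poll the default RPC socket until it answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"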
00:25:03.187 [2024-11-17 19:34:01.381100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.187 [2024-11-17 19:34:01.381218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.187 [2024-11-17 19:34:01.381287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.187 [2024-11-17 19:34:01.381285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:04.120 19:34:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.120 19:34:02 -- common/autotest_common.sh@862 -- # return 0 00:25:04.121 19:34:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:04.121 19:34:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.121 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.121 19:34:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.121 19:34:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.121 19:34:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.121 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.121 [2024-11-17 19:34:02.210388] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.121 19:34:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.121 19:34:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:04.121 19:34:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:04.121 19:34:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.121 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.121 19:34:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.121 19:34:02 -- target/shutdown.sh@28 -- # cat 00:25:04.121 19:34:02 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:04.121 19:34:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.121 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.121 Malloc1 00:25:04.121 [2024-11-17 19:34:02.285536] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.121 Malloc2 
00:25:04.121 Malloc3 00:25:04.379 Malloc4 00:25:04.379 Malloc5 00:25:04.379 Malloc6 00:25:04.379 Malloc7 00:25:04.379 Malloc8 00:25:04.638 Malloc9 00:25:04.638 Malloc10 00:25:04.638 19:34:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.638 19:34:02 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:04.638 19:34:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.638 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.638 19:34:02 -- target/shutdown.sh@124 -- # perfpid=1276248 00:25:04.638 19:34:02 -- target/shutdown.sh@125 -- # waitforlisten 1276248 /var/tmp/bdevperf.sock 00:25:04.638 19:34:02 -- common/autotest_common.sh@829 -- # '[' -z 1276248 ']' 00:25:04.638 19:34:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.638 19:34:02 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:04.638 19:34:02 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:04.638 19:34:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.638 19:34:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.638 19:34:02 -- nvmf/common.sh@520 -- # config=() 00:25:04.638 19:34:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.638 19:34:02 -- nvmf/common.sh@520 -- # local subsystem config 00:25:04.638 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.638 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.638 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.638 { 00:25:04.638 "params": { 00:25:04.638 "name": "Nvme$subsystem", 00:25:04.638 "trtype": "$TEST_TRANSPORT", 00:25:04.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.638 "adrfam": "ipv4", 00:25:04.638 "trsvcid": "$NVMF_PORT", 00:25:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.638 "hdgst": ${hdgst:-false}, 00:25:04.638 "ddgst": ${ddgst:-false} 00:25:04.638 }, 00:25:04.638 "method": "bdev_nvme_attach_controller" 00:25:04.638 } 00:25:04.638 EOF 00:25:04.638 )") 00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.639 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.639 { 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme$subsystem", 00:25:04.639 "trtype": "$TEST_TRANSPORT", 00:25:04.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "$NVMF_PORT", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.639 "hdgst": ${hdgst:-false}, 00:25:04.639 "ddgst": ${ddgst:-false} 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 } 00:25:04.639 EOF 00:25:04.639 )") 00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.639 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.639 { 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme$subsystem", 00:25:04.639 "trtype": "$TEST_TRANSPORT", 00:25:04.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "$NVMF_PORT", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.639 "hdgst": ${hdgst:-false}, 00:25:04.639 "ddgst": ${ddgst:-false} 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 } 00:25:04.639 EOF 00:25:04.639 )") 00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.639 19:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.639 { 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme$subsystem", 00:25:04.639 "trtype": "$TEST_TRANSPORT", 00:25:04.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "$NVMF_PORT", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.639 "hdgst": ${hdgst:-false}, 00:25:04.639 "ddgst": ${ddgst:-false} 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 } 00:25:04.639 EOF 00:25:04.639 )") 00:25:04.639 19:34:02 -- nvmf/common.sh@542 -- # cat 00:25:04.639 19:34:02 -- nvmf/common.sh@544 -- # jq . 
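The "--json /dev/fd/63" on the bdevperf command line traced above is bash process substitution: the JSON produced by gen_nvmf_target_json is handed to bdevperf through a pipe file descriptor rather than a temporary file. Written out stand-alone (paths relative to an SPDK build tree, which is an assumption; the harness uses absolute Jenkins paths), the launch is:

# bdevperf attaches to all ten NVMe-oF subsystems over TCP and runs a 10-second
# "verify" workload at queue depth 64 with 64 KiB I/Os, exactly as traced above.
./build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

Once bdevperf is running, tc3 (unlike tc2) kills the target itself (pid 1276060) while I/O is still in flight, which is what produces the flood of nvmf_tcp_qpair_set_recv_state notices further down in this log.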
00:25:04.639 19:34:02 -- nvmf/common.sh@545 -- # IFS=, 00:25:04.639 19:34:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme1", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme2", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme3", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme4", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme5", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme6", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme7", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme8", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": 
"bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme9", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 },{ 00:25:04.639 "params": { 00:25:04.639 "name": "Nvme10", 00:25:04.639 "trtype": "tcp", 00:25:04.639 "traddr": "10.0.0.2", 00:25:04.639 "adrfam": "ipv4", 00:25:04.639 "trsvcid": "4420", 00:25:04.639 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:04.639 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:04.639 "hdgst": false, 00:25:04.639 "ddgst": false 00:25:04.639 }, 00:25:04.639 "method": "bdev_nvme_attach_controller" 00:25:04.639 }' 00:25:04.639 [2024-11-17 19:34:02.799329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:04.639 [2024-11-17 19:34:02.799406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276248 ] 00:25:04.639 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.639 [2024-11-17 19:34:02.863053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.898 [2024-11-17 19:34:02.948464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.798 Running I/O for 10 seconds... 00:25:07.057 19:34:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.057 19:34:05 -- common/autotest_common.sh@862 -- # return 0 00:25:07.057 19:34:05 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:07.057 19:34:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.057 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:25:07.057 19:34:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.057 19:34:05 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.057 19:34:05 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:07.057 19:34:05 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:07.057 19:34:05 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:07.057 19:34:05 -- target/shutdown.sh@57 -- # local ret=1 00:25:07.057 19:34:05 -- target/shutdown.sh@58 -- # local i 00:25:07.057 19:34:05 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:07.057 19:34:05 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:07.057 19:34:05 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:07.057 19:34:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:07.057 19:34:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.057 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:25:07.057 19:34:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.350 19:34:05 -- target/shutdown.sh@60 -- # read_io_count=211 00:25:07.350 19:34:05 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:25:07.350 19:34:05 -- target/shutdown.sh@64 -- # ret=0 00:25:07.350 19:34:05 -- target/shutdown.sh@65 -- # break 00:25:07.350 19:34:05 -- target/shutdown.sh@69 -- # return 0 00:25:07.350 19:34:05 -- target/shutdown.sh@134 -- # killprocess 
1276060 00:25:07.350 19:34:05 -- common/autotest_common.sh@936 -- # '[' -z 1276060 ']' 00:25:07.350 19:34:05 -- common/autotest_common.sh@940 -- # kill -0 1276060 00:25:07.351 19:34:05 -- common/autotest_common.sh@941 -- # uname 00:25:07.351 19:34:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:07.351 19:34:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1276060 00:25:07.351 19:34:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:07.351 19:34:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:07.351 19:34:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1276060' 00:25:07.351 killing process with pid 1276060 00:25:07.351 19:34:05 -- common/autotest_common.sh@955 -- # kill 1276060 00:25:07.351 19:34:05 -- common/autotest_common.sh@960 -- # wait 1276060 00:25:07.351 [2024-11-17 19:34:05.371853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.371981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372297] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the 
state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.372888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e70 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374625] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.351 [2024-11-17 19:34:05.374831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 
19:34:05.374880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.374998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same 
with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303820 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.375851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.352 [2024-11-17 19:34:05.375895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.375916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.352 [2024-11-17 19:34:05.375930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.375945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.352 [2024-11-17 19:34:05.375959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.375980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.352 [2024-11-17 19:34:05.375993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7fe0 is same with the state(5) to be set 00:25:07.352 [2024-11-17 19:34:05.376087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376210] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.352 [2024-11-17 19:34:05.376741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.352 [2024-11-17 19:34:05.376754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33408 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.376972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.376993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36352 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
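The nvme_qpair messages above and below are bdevperf reporting every outstanding I/O that completed as "ABORTED - SQ DELETION" once the target's submission queues went away. When triaging a run it can help to tally them rather than read them one by one; a small sketch, assuming the console output has been saved to a file (build.log is a placeholder name, not something this job itself produces):

# Total number of aborted completions in the saved console log.
grep -o 'ABORTED - SQ DELETION' build.log | wc -l

# Aborted I/Os split by direction, as printed by nvme_io_qpair_print_command.
grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+' build.log | awk '{print $1}' | sort | uniq -c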
00:25:07.353 [2024-11-17 19:34:05.377781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.353 [2024-11-17 19:34:05.377954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.353 [2024-11-17 19:34:05.377974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.354 [2024-11-17 19:34:05.377988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.354 [2024-11-17 19:34:05.378004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.354 [2024-11-17 19:34:05.378018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.354 [2024-11-17 19:34:05.378038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.354 [2024-11-17 19:34:05.378052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.354 [2024-11-17 19:34:05.378067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.354 
[2024-11-17 19:34:05.378081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.354 [2024-11-17 19:34:05.378183] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x219fac0 was disconnected and freed. reset controller. 00:25:07.354 [2024-11-17 19:34:05.380409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.380891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381064] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381087] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381231] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381356] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301300 is same with the 
state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.381645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.354 [2024-11-17 19:34:05.381707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c7fe0 (9): Bad file descriptor 00:25:07.354 [2024-11-17 19:34:05.384950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.384999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.354 [2024-11-17 19:34:05.385097] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:07.355 [2024-11-17 19:34:05.385112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385148] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 
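The "resetting controller" and "Bad file descriptor" notices above are the expected fallout of this shutdown test: the target/shutdown.sh trace earlier in the excerpt waits until bdevperf has completed at least 100 reads on Nvme1n1 (read_io_count=211 in this run) and then kills the target process (pid 1276060) while I/O is still in flight. A minimal standalone sketch of that polling step, assuming SPDK's scripts/rpc.py is invokable as rpc.py and bdevperf listens on /var/tmp/bdevperf.sock; rpc_cmd and killprocess in the trace are test-suite helpers, approximated here with rpc.py and a plain kill:

#!/usr/bin/env bash
# Poll bdevperf over its RPC socket until Nvme1n1 has served >= 100 reads,
# then stop the target (PID passed as $1) while I/O is still running.
sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1
target_pid=$1

ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "${read_io_count:-0}" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 1
done

# Killing the target mid-traffic is what produces the qpair errors and
# aborted completions seen throughout this log.
[ "$ret" -eq 0 ] && kill "$target_pid"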
00:25:07.355 [2024-11-17 19:34:05.385226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-17 19:34:05.385275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 [2024-11-17 19:34:05.385326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-17 19:34:05.385351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
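For reference, the configuration flattened into the first lines of this excerpt is assembled by nvmf/common.sh as a comma-joined list of bdev_nvme_attach_controller calls, one per subsystem cnode1 through cnode10, all pointing at 10.0.0.2:4420. A rough sketch that rebuilds just that inner list from the values visible in the log (how bdevperf wraps and consumes the result is not shown by this trace and is left out):

#!/usr/bin/env bash
# Emit one {params, method} block per controller and join them with commas,
# mirroring the IFS=, / printf lines near the top of this excerpt.
blocks=()
for i in $(seq 1 10); do
    blocks+=("{
\"params\": {
\"name\": \"Nvme$i\",
\"trtype\": \"tcp\",
\"traddr\": \"10.0.0.2\",
\"adrfam\": \"ipv4\",
\"trsvcid\": \"4420\",
\"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\",
\"hostnqn\": \"nqn.2016-06.io.spdk:host$i\",
\"hdgst\": false,
\"ddgst\": false
},
\"method\": \"bdev_nvme_attach_controller\"
}")
done
IFS=,
printf '%s\n' "${blocks[*]}"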
00:25:07.355 [2024-11-17 19:34:05.385389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-17 19:34:05.385414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 [2024-11-17 19:34:05.385457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-17 19:34:05.385482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 [2024-11-17 19:34:05.385519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be 
set 00:25:07.355 [2024-11-17 19:34:05.385542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-17 19:34:05.385543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 [2024-11-17 19:34:05.385580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 [2024-11-17 19:34:05.385609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with [2024-11-17 19:34:05.385621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31872 len:12the state(5) to be set 00:25:07.355 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.355 [2024-11-17 19:34:05.385636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.355 [2024-11-17 19:34:05.385648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.355 [2024-11-17 19:34:05.385654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-17 19:34:05.385668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.385693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.385695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.385710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.385727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.385741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13017b0 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.385756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.385973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.385986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.356 [2024-11-17 19:34:05.386014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233fe90 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.386101] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x233fe90 was disconnected and freed. reset controller. 00:25:07.356 [2024-11-17 19:34:05.386650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f240 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.386871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.386983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.386996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4200 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.387056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.387071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.387086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.387107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.387121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.387135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.356 [2024-11-17 19:34:05.387148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.356 [2024-11-17 19:34:05.387161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2710 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.356 [2024-11-17 19:34:05.387822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387939] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.387998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388235] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.388251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1301c40 is same with the 
state(5) to be set
00:25:07.357 [2024-11-17 19:34:05.389323] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set
00:25:07.357 [2024-11-17 19:34:05.389390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:07.357 [2024-11-17 19:34:05.389432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f4200 (9): Bad file descriptor
00:25:07.357 [2024-11-17 19:34:05.389515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.389975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390039] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390164] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.357 [2024-11-17 19:34:05.390333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the 
state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390619] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.390769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13020f0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.391604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.391971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.391987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:07.358 [2024-11-17 19:34:05.392089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13025a0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13025a0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.392314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.358 [2024-11-17 19:34:05.392319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13025a0 is same with the state(5) to be set 00:25:07.358 [2024-11-17 19:34:05.392328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.358 [2024-11-17 19:34:05.392331] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13025a0 is same with the state(5) to be set
00:25:07.358 [2024-11-17 19:34:05.392344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.358 [2024-11-17 19:34:05.392361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.358 [2024-11-17 19:34:05.392377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.358 [2024-11-17 19:34:05.392392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.358 [2024-11-17 19:34:05.392410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.359 [2024-11-17 19:34:05.392954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.359 [2024-11-17 19:34:05.392970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.392985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.360 [2024-11-17 19:34:05.393364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.360 [2024-11-17 19:34:05.393383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360
[2024-11-17 19:34:05.393397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.360 [2024-11-17 19:34:05.393629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.360 [2024-11-17 19:34:05.393644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234c1e0 is same with the state(5) to be set 00:25:07.360 [2024-11-17 19:34:05.393733] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x234c1e0 was disconnected and freed. reset controller. 
00:25:07.360 [2024-11-17 19:34:05.395525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:25:07.360 [2024-11-17 19:34:05.395559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f2710 (9): Bad file descriptor 
00:25:07.360 [2024-11-17 19:34:05.395701] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:07.360 [2024-11-17 19:34:05.396753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238f240 (9): Bad file descriptor 
00:25:07.360 [2024-11-17 19:34:05.396826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.396849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.396871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.396885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.396899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.396912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.396926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.396939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.396962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213190 is same with the state(5) to be set 
00:25:07.360 [2024-11-17 19:34:05.397007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.397028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.397044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.397057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.397055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.360 [2024-11-17 19:34:05.397071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.360 [2024-11-17 19:34:05.397085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.360 [2024-11-17 19:34:05.397085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.361 [2024-11-17 19:34:05.397116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.361 [2024-11-17 19:34:05.397131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f4f0 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.361 [2024-11-17 19:34:05.397205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.361 [2024-11-17 19:34:05.397218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.361 [2024-11-17 19:34:05.397230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.361 [2024-11-17 19:34:05.397242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397256] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:07.361 [2024-11-17 19:34:05.397269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 
00:25:07.361 [2024-11-17 19:34:05.397271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.361 [2024-11-17 19:34:05.397281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.361 [2024-11-17 19:34:05.397293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.361 [2024-11-17 19:34:05.397305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2371980 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397388] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397399] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:07.361 [2024-11-17 19:34:05.397411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397472] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:07.361 [2024-11-17 19:34:05.397478] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the 
state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.397874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302a50 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.398988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.361 [2024-11-17 19:34:05.399000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302ee0 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399220] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:07.362 [2024-11-17 19:34:05.399509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399819] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.399978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303390 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.400080] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:07.362 [2024-11-17 19:34:05.400727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400929] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.400978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.362 [2024-11-17 19:34:05.401369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bda80 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.401455] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21bda80 was disconnected and freed. reset controller. 
00:25:07.362 [2024-11-17 19:34:05.401519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:07.362 [2024-11-17 19:34:05.401541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.362 [2024-11-17 19:34:05.401555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4200 is same with the state(5) to be set 00:25:07.362 [2024-11-17 19:34:05.401587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:07.362 [2024-11-17 19:34:05.401606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.401620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7fe0 is same with the state(5) to be set 00:25:07.363 [2024-11-17 19:34:05.402589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:07.363 [2024-11-17 19:34:05.402647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2395e80 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.402699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f4200 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.402723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c7fe0 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.402822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f2710 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.402847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:07.363 [2024-11-17 19:34:05.402861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:07.363 [2024-11-17 19:34:05.402877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:07.363 [2024-11-17 19:34:05.402897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.363 [2024-11-17 19:34:05.402912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.363 [2024-11-17 19:34:05.402925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.363 [2024-11-17 19:34:05.403257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.363 [2024-11-17 19:34:05.403279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:07.363 [2024-11-17 19:34:05.403378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.363 [2024-11-17 19:34:05.403468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.363 [2024-11-17 19:34:05.403494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2395e80 with addr=10.0.0.2, port=4420 00:25:07.363 [2024-11-17 19:34:05.403511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395e80 is same with the state(5) to be set 00:25:07.363 [2024-11-17 19:34:05.403527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:07.363 [2024-11-17 19:34:05.403540] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:07.363 [2024-11-17 19:34:05.403553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:07.363 [2024-11-17 19:34:05.403621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.363 [2024-11-17 19:34:05.403645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2395e80 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.403726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:07.363 [2024-11-17 19:34:05.403745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:07.363 [2024-11-17 19:34:05.403759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:07.363 [2024-11-17 19:34:05.403812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.363 [2024-11-17 19:34:05.404183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.363 [2024-11-17 19:34:05.404357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.363 [2024-11-17 19:34:05.404441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.363 [2024-11-17 19:34:05.404466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c7fe0 with addr=10.0.0.2, port=4420 00:25:07.363 [2024-11-17 19:34:05.404483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7fe0 is same with the state(5) to be set 00:25:07.363 [2024-11-17 19:34:05.404538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c7fe0 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.404593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.363 [2024-11-17 19:34:05.404609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.363 [2024-11-17 19:34:05.404622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.363 [2024-11-17 19:34:05.404690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:07.363 [2024-11-17 19:34:05.406091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:07.363 [2024-11-17 19:34:05.406233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.363 [2024-11-17 19:34:05.406354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.363 [2024-11-17 19:34:05.406379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f2710 with addr=10.0.0.2, port=4420 00:25:07.363 [2024-11-17 19:34:05.406396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2710 is same with the state(5) to be set 00:25:07.363 [2024-11-17 19:34:05.406451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f2710 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.406506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:07.363 [2024-11-17 19:34:05.406522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:07.363 [2024-11-17 19:34:05.406536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:07.363 [2024-11-17 19:34:05.406589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.363 [2024-11-17 19:34:05.406705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f960 is same with the state(5) to be set 00:25:07.363 [2024-11-17 19:34:05.406883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 
19:34:05.406931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.406979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.363 [2024-11-17 19:34:05.406998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219c00 is same with the state(5) to be set 00:25:07.363 [2024-11-17 19:34:05.407040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213190 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.407070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f4f0 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.407102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2371980 (9): Bad file descriptor 00:25:07.363 [2024-11-17 19:34:05.407249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.363 [2024-11-17 19:34:05.407524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.363 [2024-11-17 19:34:05.407540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.407958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.407980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.408364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.408381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 
[2024-11-17 19:34:05.408398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.364 [2024-11-17 19:34:05.423437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.364 [2024-11-17 19:34:05.423452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 
19:34:05.423511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.423984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.423998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.424013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.424028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.424043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.424057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.424072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bf060 is same with the state(5) to be set 00:25:07.365 [2024-11-17 19:34:05.426389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:07.365 [2024-11-17 19:34:05.426525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238f960 (9): Bad file descriptor 00:25:07.365 [2024-11-17 19:34:05.426569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219c00 (9): Bad file descriptor 00:25:07.365 [2024-11-17 19:34:05.426896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.365 
[2024-11-17 19:34:05.427012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.365 [2024-11-17 19:34:05.427037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238f240 with addr=10.0.0.2, port=4420 00:25:07.365 [2024-11-17 19:34:05.427055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f240 is same with the state(5) to be set 00:25:07.365 [2024-11-17 19:34:05.427130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.365 [2024-11-17 19:34:05.427630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.365 [2024-11-17 19:34:05.427644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.427976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.427989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.366 [2024-11-17 19:34:05.428336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 
[2024-11-17 19:34:05.428637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.366 [2024-11-17 19:34:05.428841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.366 [2024-11-17 19:34:05.428858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.428872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.428888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.428901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.428917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.428930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 
19:34:05.428947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.428976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.428989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.429005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.429019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.429035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.429048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.429064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.429078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.429092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b6d20 is same with the state(5) to be set 00:25:07.367 [2024-11-17 19:34:05.430311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430467] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.430971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.430984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.367 [2024-11-17 19:34:05.431254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.367 [2024-11-17 19:34:05.431270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.431981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.431996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.432256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.432270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b8300 is same with the state(5) to be set 00:25:07.368 [2024-11-17 19:34:05.433483] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.368 [2024-11-17 19:34:05.433506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.368 [2024-11-17 19:34:05.433527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.433985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.433999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33664 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.434576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.434590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.444140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.444192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.444211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.444225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.444240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.369 [2024-11-17 19:34:05.444265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.369 [2024-11-17 19:34:05.444282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:07.369 [2024-11-17 19:34:05.444296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 
[2024-11-17 19:34:05.444587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 
19:34:05.444906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.444980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.444993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.445009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b98e0 is same with the state(5) to be set 00:25:07.370 [2024-11-17 19:34:05.446597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:07.370 [2024-11-17 19:34:05.446643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:07.370 [2024-11-17 19:34:05.446663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.370 [2024-11-17 19:34:05.446688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:07.370 [2024-11-17 19:34:05.446708] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:07.370 [2024-11-17 19:34:05.446779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238f240 (9): Bad file descriptor 00:25:07.370 [2024-11-17 19:34:05.446862] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.370 [2024-11-17 19:34:05.446888] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.370 [2024-11-17 19:34:05.446908] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
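The disconnect and failover notices above show bdev_nvme tearing down qpairs while a reset is already in flight for several subsystems. A rough, hypothetical triage sketch (Python, not part of this test run; it simply greps the two message shapes printed above from a saved copy of this log on stdin) could list which NQNs were put into reset and how often the failover path was declined:

```python
# Hypothetical log-triage helper; assumes this log text is piped on stdin.
# It matches only two message shapes that appear verbatim above:
#   nvme_ctrlr_disconnect: *NOTICE*: [<nqn>] resetting controller
#   bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
import re
import sys
from collections import Counter

RESET_RE = re.compile(r"nvme_ctrlr_disconnect: \*NOTICE\*: \[([^\]]+)\] resetting controller")
FAILOVER_RE = re.compile(r"Unable to perform failover, already in progress")

def main(stream=sys.stdin):
    resets = Counter()
    declined = 0
    for line in stream:
        for nqn in RESET_RE.findall(line):
            resets[nqn] += 1          # one reset request per matching notice
        declined += len(FAILOVER_RE.findall(line))
    for nqn, n in sorted(resets.items()):
        print(f"{nqn}: reset requested {n} time(s)")
    print(f"failover declined (already in progress): {declined}")

if __name__ == "__main__":
    main()
```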
00:25:07.370 [2024-11-17 19:34:05.447006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:07.370 [2024-11-17 19:34:05.447032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:07.370 [2024-11-17 19:34:05.447218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f4200 with addr=10.0.0.2, port=4420
00:25:07.370 [2024-11-17 19:34:05.447360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4200 is same with the state(5) to be set
00:25:07.370 [2024-11-17 19:34:05.447444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2395e80 with addr=10.0.0.2, port=4420
00:25:07.370 [2024-11-17 19:34:05.447575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395e80 is same with the state(5) to be set
00:25:07.370 [2024-11-17 19:34:05.447683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c7fe0 with addr=10.0.0.2, port=4420
00:25:07.370 [2024-11-17 19:34:05.447812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c7fe0 is same with the state(5) to be set
00:25:07.370 [2024-11-17 19:34:05.447890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.447997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f2710 with addr=10.0.0.2, port=4420
00:25:07.370 [2024-11-17 19:34:05.448014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2710 is same with the state(5) to be set
00:25:07.370 [2024-11-17 19:34:05.448094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.448183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:07.370 [2024-11-17 19:34:05.448207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2371980 with addr=10.0.0.2, port=4420
00:25:07.370 [2024-11-17 19:34:05.448223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2371980 is same with the state(5) to be set
00:25:07.370 [2024-11-17 19:34:05.448244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:07.370 [2024-11-17 19:34:05.448259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:07.370 [2024-11-17 19:34:05.448275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
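For reference, errno 111 in the posix_sock_create errors above is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 while the target side is being torn down, so each reconnect attempt is refused until the listener comes back. A small illustrative probe (Python; not part of the test itself, the address and port are simply copied from the log) reproduces the same condition:

```python
# Minimal sketch: attempt a plain TCP connect to the NVMe-oF TCP port seen in
# the log and report the errno, mirroring the "connect() failed, errno = 111"
# messages above. Address/port are assumptions taken from this log.
import errno
import socket

def probe(addr: str = "10.0.0.2", port: int = 4420, timeout: float = 1.0) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((addr, port))
        print(f"{addr}:{port} accepted the connection (listener is back up)")
    except OSError as e:
        # e.errno == errno.ECONNREFUSED (111) corresponds to the log messages above.
        print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno, '?')})")
    finally:
        s.close()

if __name__ == "__main__":
    probe()
```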
00:25:07.370 [2024-11-17 19:34:05.449159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.449184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.370 [2024-11-17 19:34:05.449209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.370 [2024-11-17 19:34:05.449224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 
19:34:05.449484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.449979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.449993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.371 [2024-11-17 19:34:05.450408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.371 [2024-11-17 19:34:05.450424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.450971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.450985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.451001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.451015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.451031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.451048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.451064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.451078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.451094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.451107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.451121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21baec0 is same with the state(5) to be set 00:25:07.372 [2024-11-17 19:34:05.452333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.372 [2024-11-17 19:34:05.452835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.372 [2024-11-17 19:34:05.452849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.373 [2024-11-17 19:34:05.452865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.373 [2024-11-17 19:34:05.452879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.373 [2024-11-17 19:34:05.452894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.373 [2024-11-17 19:34:05.452908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.373 [2024-11-17 19:34:05.452925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.373 [2024-11-17 19:34:05.452938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.373 [2024-11-17 19:34:05.452954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.373 [2024-11-17 19:34:05.452967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.373 [2024-11-17 19:34:05.452982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bc4a0 is same with the state(5) to be set 00:25:07.373 [2024-11-17 19:34:05.454337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
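Note: the paired NOTICE lines above are the host driver dumping, for each outstanding READ/WRITE on qid:1, the original command followed by its completion status. Status (00/08) decodes to status code type 00h (generic command status) and status code 08h, Command Aborted due to SQ Deletion, which is what every queued I/O receives when the submission queue is torn down mid-reset; the final ERROR line then shows the controller reset itself failing because the target side is going away. A minimal, hedged way to summarize such a dump offline; the file name shutdown_tc3.log is an assumption for illustration, not something the test writes:

  # Count commands aborted by SQ deletion, grouped by queue id (log file name assumed).
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' shutdown_tc3.log | sort | uniq -c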
00:25:07.373 [2024-11-17 19:34:05.454368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:07.373 task offset: 34816 on job bdev=Nvme1n1 fails
00:25:07.373
00:25:07.373 Latency(us)
00:25:07.373 [2024-11-17T18:34:05.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme1n1 ended in about 0.69 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme1n1 : 0.69 362.49 22.66 92.43 0.00 139693.32 49321.91 116508.44
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme2n1 ended in about 0.70 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme2n1 : 0.70 354.28 22.14 34.29 0.00 161357.75 2997.67 157674.76
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme3n1 ended in about 0.71 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme3n1 : 0.71 354.97 22.19 90.51 0.00 139614.10 55535.69 115731.72
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme4n1 ended in about 0.74 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme4n1 : 0.74 338.27 21.14 86.25 0.00 145139.93 81944.27 114178.28
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme5n1 ended in about 0.75 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme5n1 : 0.75 336.84 21.05 85.89 0.00 144243.62 76118.85 113401.55
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme6n1 ended in about 0.76 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme6n1 : 0.76 331.17 20.70 84.44 0.00 145382.81 80390.83 113401.55
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme7n1 ended in about 0.76 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme7n1 : 0.76 328.55 20.53 83.77 0.00 145123.29 76895.57 114955.00
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme8n1 ended in about 0.77 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme8n1 : 0.77 327.83 20.49 27.43 0.00 156710.28 63691.28 121945.51
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme9n1 ended in about 0.71 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme9n1 : 0.71 351.32 21.96 29.39 0.00 151345.77 10485.76 121945.51
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:07.373 [2024-11-17T18:34:05.640Z] Job: Nvme10n1 ended in about 0.74 seconds with error
00:25:07.373 Verification LBA range: start 0x0 length 0x400
00:25:07.373 Nvme10n1 : 0.74 282.22 17.64 86.84 0.00 156606.77 100197.26 122722.23
00:25:07.373 [2024-11-17T18:34:05.640Z] ===================================================================================================================
00:25:07.373 [2024-11-17T18:34:05.640Z] Total : 3367.94 210.50 701.24 0.00 148060.67 2997.67 157674.76
00:25:07.373 [2024-11-17 19:34:05.481122] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:07.373 [2024-11-17 19:34:05.481198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:07.373 [2024-11-17 19:34:05.481442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.481563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.481600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2213190 with addr=10.0.0.2, port=4420 00:25:07.373 [2024-11-17 19:34:05.481632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213190 is same with the state(5) to be set 00:25:07.373 [2024-11-17 19:34:05.481741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.481860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.481885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221f4f0 with addr=10.0.0.2, port=4420 00:25:07.373 [2024-11-17 19:34:05.481902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f4f0 is same with the state(5) to be set 00:25:07.373 [2024-11-17 19:34:05.481927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f4200 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.481950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2395e80 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.481970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c7fe0 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.481988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f2710 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.482007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2371980 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.482303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.482393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.482419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219c00 with addr=10.0.0.2, port=4420 00:25:07.373 [2024-11-17 19:34:05.482436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219c00 is same with the state(5) to be set 00:25:07.373 [2024-11-17 19:34:05.482526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.482617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.373 [2024-11-17 19:34:05.482644]
nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238f960 with addr=10.0.0.2, port=4420 00:25:07.373 [2024-11-17 19:34:05.482661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f960 is same with the state(5) to be set 00:25:07.373 [2024-11-17 19:34:05.482698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213190 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.482718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f4f0 (9): Bad file descriptor 00:25:07.373 [2024-11-17 19:34:05.482735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:07.373 [2024-11-17 19:34:05.482749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:07.373 [2024-11-17 19:34:05.482765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:07.373 [2024-11-17 19:34:05.482787] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:07.373 [2024-11-17 19:34:05.482801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:07.373 [2024-11-17 19:34:05.482815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:07.373 [2024-11-17 19:34:05.482832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.373 [2024-11-17 19:34:05.482846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.373 [2024-11-17 19:34:05.482860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.374 [2024-11-17 19:34:05.482878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.482892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.482913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:07.374 [2024-11-17 19:34:05.482932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.482946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.482959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:07.374 [2024-11-17 19:34:05.483006] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.374 [2024-11-17 19:34:05.483029] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.374 [2024-11-17 19:34:05.483047] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.374 [2024-11-17 19:34:05.483066] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:07.374 [2024-11-17 19:34:05.483085] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.374 [2024-11-17 19:34:05.483104] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.374 [2024-11-17 19:34:05.483123] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:07.374 [2024-11-17 19:34:05.483799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.483823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.483836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.483849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.483861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.483883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219c00 (9): Bad file descriptor 00:25:07.374 [2024-11-17 19:34:05.483904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238f960 (9): Bad file descriptor 00:25:07.374 [2024-11-17 19:34:05.483920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.483934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.483947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:07.374 [2024-11-17 19:34:05.483965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.483979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.483992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:07.374 [2024-11-17 19:34:05.484056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:07.374 [2024-11-17 19:34:05.484080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.484093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.484115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.484130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.484143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:25:07.374 [2024-11-17 19:34:05.484166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.484181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.484195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:07.374 [2024-11-17 19:34:05.484247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.484265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.374 [2024-11-17 19:34:05.484345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.374 [2024-11-17 19:34:05.484438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.374 [2024-11-17 19:34:05.484463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238f240 with addr=10.0.0.2, port=4420 00:25:07.374 [2024-11-17 19:34:05.484479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f240 is same with the state(5) to be set 00:25:07.374 [2024-11-17 19:34:05.484525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238f240 (9): Bad file descriptor 00:25:07.374 [2024-11-17 19:34:05.484568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:07.374 [2024-11-17 19:34:05.484586] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:07.374 [2024-11-17 19:34:05.484601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:07.374 [2024-11-17 19:34:05.484642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
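Note: everything from spdk_app_stop onward is the expected unwind for this shutdown case. The target has already been stopped, so each reconnect attempt dies in connect() with errno 111 (ECONNREFUSED), the already-closed qpair sockets report Bad file descriptor when flushed, spdk_nvme_ctrlr_reconnect_poll_async gives up, and every controller from cnode1 through cnode10 ends up in the failed state while pending failovers are skipped as already in progress. A quick, purely illustrative check of the errno seen in the posix_sock_create errors above (any shell with Python 3 available):

  # errno 111 is ECONNREFUSED; the number comes straight from the connect() failures above.
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'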
00:25:07.660 19:34:05 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:07.660 19:34:05 -- target/shutdown.sh@138 -- # sleep 1 00:25:09.040 19:34:06 -- target/shutdown.sh@141 -- # kill -9 1276248 00:25:09.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1276248) - No such process 00:25:09.040 19:34:06 -- target/shutdown.sh@141 -- # true 00:25:09.040 19:34:06 -- target/shutdown.sh@143 -- # stoptarget 00:25:09.040 19:34:06 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:09.040 19:34:06 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:09.040 19:34:06 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:09.041 19:34:06 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:09.041 19:34:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:09.041 19:34:06 -- nvmf/common.sh@116 -- # sync 00:25:09.041 19:34:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:09.041 19:34:06 -- nvmf/common.sh@119 -- # set +e 00:25:09.041 19:34:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:09.041 19:34:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:09.041 rmmod nvme_tcp 00:25:09.041 rmmod nvme_fabrics 00:25:09.041 rmmod nvme_keyring 00:25:09.041 19:34:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:09.041 19:34:06 -- nvmf/common.sh@123 -- # set -e 00:25:09.041 19:34:06 -- nvmf/common.sh@124 -- # return 0 00:25:09.041 19:34:06 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:09.041 19:34:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:09.041 19:34:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:09.041 19:34:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:09.041 19:34:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.041 19:34:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:09.041 19:34:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.041 19:34:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.041 19:34:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.944 19:34:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:10.944 00:25:10.944 real 0m8.012s 00:25:10.944 user 0m20.970s 00:25:10.944 sys 0m1.471s 00:25:10.944 19:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:10.944 19:34:08 -- common/autotest_common.sh@10 -- # set +x 00:25:10.944 ************************************ 00:25:10.944 END TEST nvmf_shutdown_tc3 00:25:10.944 ************************************ 00:25:10.944 19:34:09 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:10.944 00:25:10.944 real 0m28.926s 00:25:10.944 user 1m22.989s 00:25:10.944 sys 0m6.447s 00:25:10.944 19:34:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:10.944 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.944 ************************************ 00:25:10.944 END TEST nvmf_shutdown 00:25:10.944 ************************************ 00:25:10.944 19:34:09 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:10.944 19:34:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.944 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.944 19:34:09 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:10.944 19:34:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.944 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.944 
19:34:09 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:10.944 19:34:09 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:10.944 19:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:10.944 19:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:10.944 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:25:10.944 ************************************ 00:25:10.944 START TEST nvmf_multicontroller 00:25:10.944 ************************************ 00:25:10.944 19:34:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:10.944 * Looking for test storage... 00:25:10.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.944 19:34:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:10.944 19:34:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:10.944 19:34:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:10.944 19:34:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:10.944 19:34:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:10.944 19:34:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:10.944 19:34:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:10.944 19:34:09 -- scripts/common.sh@335 -- # IFS=.-: 00:25:10.944 19:34:09 -- scripts/common.sh@335 -- # read -ra ver1 00:25:10.944 19:34:09 -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.944 19:34:09 -- scripts/common.sh@336 -- # read -ra ver2 00:25:10.944 19:34:09 -- scripts/common.sh@337 -- # local 'op=<' 00:25:10.944 19:34:09 -- scripts/common.sh@339 -- # ver1_l=2 00:25:10.944 19:34:09 -- scripts/common.sh@340 -- # ver2_l=1 00:25:10.944 19:34:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:10.944 19:34:09 -- scripts/common.sh@343 -- # case "$op" in 00:25:10.944 19:34:09 -- scripts/common.sh@344 -- # : 1 00:25:10.944 19:34:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:10.944 19:34:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.944 19:34:09 -- scripts/common.sh@364 -- # decimal 1 00:25:10.944 19:34:09 -- scripts/common.sh@352 -- # local d=1 00:25:10.944 19:34:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.944 19:34:09 -- scripts/common.sh@354 -- # echo 1 00:25:10.944 19:34:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:10.944 19:34:09 -- scripts/common.sh@365 -- # decimal 2 00:25:10.944 19:34:09 -- scripts/common.sh@352 -- # local d=2 00:25:10.944 19:34:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.944 19:34:09 -- scripts/common.sh@354 -- # echo 2 00:25:10.944 19:34:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:10.944 19:34:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:10.944 19:34:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:10.944 19:34:09 -- scripts/common.sh@367 -- # return 0 00:25:10.944 19:34:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.944 19:34:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:10.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.944 --rc genhtml_branch_coverage=1 00:25:10.944 --rc genhtml_function_coverage=1 00:25:10.944 --rc genhtml_legend=1 00:25:10.944 --rc geninfo_all_blocks=1 00:25:10.944 --rc geninfo_unexecuted_blocks=1 00:25:10.944 00:25:10.944 ' 00:25:10.944 19:34:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:10.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.944 --rc genhtml_branch_coverage=1 00:25:10.945 --rc genhtml_function_coverage=1 00:25:10.945 --rc genhtml_legend=1 00:25:10.945 --rc geninfo_all_blocks=1 00:25:10.945 --rc geninfo_unexecuted_blocks=1 00:25:10.945 00:25:10.945 ' 00:25:10.945 19:34:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:10.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.945 --rc genhtml_branch_coverage=1 00:25:10.945 --rc genhtml_function_coverage=1 00:25:10.945 --rc genhtml_legend=1 00:25:10.945 --rc geninfo_all_blocks=1 00:25:10.945 --rc geninfo_unexecuted_blocks=1 00:25:10.945 00:25:10.945 ' 00:25:10.945 19:34:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:10.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.945 --rc genhtml_branch_coverage=1 00:25:10.945 --rc genhtml_function_coverage=1 00:25:10.945 --rc genhtml_legend=1 00:25:10.945 --rc geninfo_all_blocks=1 00:25:10.945 --rc geninfo_unexecuted_blocks=1 00:25:10.945 00:25:10.945 ' 00:25:10.945 19:34:09 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.945 19:34:09 -- nvmf/common.sh@7 -- # uname -s 00:25:10.945 19:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.945 19:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.945 19:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.945 19:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.945 19:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.945 19:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.945 19:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.945 19:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.945 19:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.945 19:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.945 19:34:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.945 19:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.945 19:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.945 19:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.945 19:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.945 19:34:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.945 19:34:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.945 19:34:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.945 19:34:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.945 19:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.945 19:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.945 19:34:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.945 19:34:09 -- paths/export.sh@5 -- # export PATH 00:25:10.945 19:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.945 19:34:09 -- nvmf/common.sh@46 -- # : 0 00:25:10.945 19:34:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:10.945 19:34:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:10.945 19:34:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:10.945 19:34:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.945 19:34:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.945 19:34:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:10.945 19:34:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:10.945 19:34:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:10.945 19:34:09 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.945 19:34:09 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.945 19:34:09 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:10.945 19:34:09 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:10.945 19:34:09 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.945 19:34:09 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:10.945 19:34:09 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:10.945 19:34:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:10.945 19:34:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.945 19:34:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:10.945 19:34:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:10.945 19:34:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:10.945 19:34:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.945 19:34:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.945 19:34:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.945 19:34:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:10.945 19:34:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:10.945 19:34:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:10.945 19:34:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.477 19:34:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:13.477 19:34:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:13.477 19:34:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:13.477 19:34:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:13.477 19:34:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:13.477 19:34:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:13.477 19:34:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:13.477 19:34:11 -- nvmf/common.sh@294 -- # net_devs=() 00:25:13.477 19:34:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:13.477 19:34:11 -- nvmf/common.sh@295 -- # e810=() 00:25:13.477 19:34:11 -- nvmf/common.sh@295 -- # local -ga e810 00:25:13.477 19:34:11 -- nvmf/common.sh@296 -- # x722=() 00:25:13.477 19:34:11 -- nvmf/common.sh@296 -- # local -ga x722 00:25:13.477 19:34:11 -- nvmf/common.sh@297 -- # mlx=() 00:25:13.477 19:34:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:13.477 19:34:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.477 19:34:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:13.477 19:34:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:13.477 19:34:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:13.477 19:34:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.477 19:34:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.477 19:34:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.477 19:34:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.477 19:34:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:13.477 19:34:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:13.478 19:34:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:13.478 19:34:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:13.478 19:34:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.478 19:34:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.478 19:34:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.478 19:34:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.478 19:34:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.478 19:34:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.478 19:34:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.478 19:34:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.478 19:34:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.478 19:34:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.478 19:34:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.478 19:34:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.478 19:34:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:13.478 19:34:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:13.478 19:34:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:13.478 19:34:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:13.478 19:34:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:13.478 19:34:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.478 19:34:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.478 19:34:11 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.478 19:34:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:13.478 19:34:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.478 19:34:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.478 19:34:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:13.478 19:34:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.478 19:34:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.478 19:34:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:13.478 19:34:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:13.478 19:34:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.478 19:34:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.478 19:34:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.478 19:34:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.478 19:34:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:13.478 19:34:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.478 19:34:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.478 19:34:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.478 19:34:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:13.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:25:13.478 00:25:13.478 --- 10.0.0.2 ping statistics --- 00:25:13.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.478 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:13.478 19:34:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:25:13.478 00:25:13.478 --- 10.0.0.1 ping statistics --- 00:25:13.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.478 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:13.478 19:34:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.478 19:34:11 -- nvmf/common.sh@410 -- # return 0 00:25:13.478 19:34:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:13.478 19:34:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.478 19:34:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:13.478 19:34:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:13.478 19:34:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.478 19:34:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:13.478 19:34:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:13.478 19:34:11 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:13.478 19:34:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:13.478 19:34:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.478 19:34:11 -- common/autotest_common.sh@10 -- # set +x 00:25:13.478 19:34:11 -- nvmf/common.sh@469 -- # nvmfpid=1278809 00:25:13.478 19:34:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:13.478 19:34:11 -- nvmf/common.sh@470 -- # waitforlisten 1278809 00:25:13.478 19:34:11 -- common/autotest_common.sh@829 -- # '[' -z 1278809 ']' 00:25:13.478 19:34:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.478 19:34:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.478 19:34:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.478 19:34:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.478 19:34:11 -- common/autotest_common.sh@10 -- # set +x 00:25:13.478 [2024-11-17 19:34:11.408666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:13.478 [2024-11-17 19:34:11.408779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.478 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.478 [2024-11-17 19:34:11.476852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:13.478 [2024-11-17 19:34:11.565586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:13.478 [2024-11-17 19:34:11.565764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.478 [2024-11-17 19:34:11.565787] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.478 [2024-11-17 19:34:11.565803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:13.478 [2024-11-17 19:34:11.565902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.478 [2024-11-17 19:34:11.566063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.478 [2024-11-17 19:34:11.566066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.412 19:34:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.412 19:34:12 -- common/autotest_common.sh@862 -- # return 0 00:25:14.412 19:34:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:14.412 19:34:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.412 19:34:12 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 [2024-11-17 19:34:12.390822] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 Malloc0 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 [2024-11-17 19:34:12.455707] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 [2024-11-17 19:34:12.463566] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 Malloc1 00:25:14.412 19:34:12 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:14.412 19:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 19:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.412 19:34:12 -- host/multicontroller.sh@44 -- # bdevperf_pid=1278968 00:25:14.412 19:34:12 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:14.412 19:34:12 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.412 19:34:12 -- host/multicontroller.sh@47 -- # waitforlisten 1278968 /var/tmp/bdevperf.sock 00:25:14.412 19:34:12 -- common/autotest_common.sh@829 -- # '[' -z 1278968 ']' 00:25:14.412 19:34:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.412 19:34:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.412 19:34:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
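Note: at this point the target side is already prepared above (two subsystems, nqn.2016-06.io.spdk:cnode1 and cnode2, each with a Malloc namespace and listeners on 10.0.0.2 ports 4420 and 4421), and bdevperf has been launched idle with -z plus its own RPC socket (-r /var/tmp/bdevperf.sock) so controllers can be attached to it over JSON-RPC while it runs; waitforlisten above blocks until that socket answers. A hedged sketch of the same launch pattern, using plain scripts/rpc.py instead of the test's rpc_cmd wrapper; the polling loop and the rpc_get_methods probe are assumptions for illustration, not what waitforlisten literally does:

  # Start bdevperf idle on its private RPC socket (run from an SPDK checkout).
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # Poll until the app answers on that socket before attaching any controllers.
  until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done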
00:25:14.412 19:34:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.412 19:34:12 -- common/autotest_common.sh@10 -- # set +x 00:25:15.345 19:34:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.345 19:34:13 -- common/autotest_common.sh@862 -- # return 0 00:25:15.345 19:34:13 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:15.345 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.345 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.603 NVMe0n1 00:25:15.603 19:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.603 19:34:13 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.603 19:34:13 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.604 1 00:25:15.604 19:34:13 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:15.604 19:34:13 -- common/autotest_common.sh@650 -- # local es=0 00:25:15.604 19:34:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:15.604 19:34:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 request: 00:25:15.604 { 00:25:15.604 "name": "NVMe0", 00:25:15.604 "trtype": "tcp", 00:25:15.604 "traddr": "10.0.0.2", 00:25:15.604 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:15.604 "hostaddr": "10.0.0.2", 00:25:15.604 "hostsvcid": "60000", 00:25:15.604 "adrfam": "ipv4", 00:25:15.604 "trsvcid": "4420", 00:25:15.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.604 "method": "bdev_nvme_attach_controller", 00:25:15.604 "req_id": 1 00:25:15.604 } 00:25:15.604 Got JSON-RPC error response 00:25:15.604 response: 00:25:15.604 { 00:25:15.604 "code": -114, 00:25:15.604 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:15.604 } 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # es=1 00:25:15.604 19:34:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.604 19:34:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.604 19:34:13 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:15.604 19:34:13 -- common/autotest_common.sh@650 -- # local es=0 00:25:15.604 19:34:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:15.604 19:34:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 request: 00:25:15.604 { 00:25:15.604 "name": "NVMe0", 00:25:15.604 "trtype": "tcp", 00:25:15.604 "traddr": "10.0.0.2", 00:25:15.604 "hostaddr": "10.0.0.2", 00:25:15.604 "hostsvcid": "60000", 00:25:15.604 "adrfam": "ipv4", 00:25:15.604 "trsvcid": "4420", 00:25:15.604 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:15.604 "method": "bdev_nvme_attach_controller", 00:25:15.604 "req_id": 1 00:25:15.604 } 00:25:15.604 Got JSON-RPC error response 00:25:15.604 response: 00:25:15.604 { 00:25:15.604 "code": -114, 00:25:15.604 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:15.604 } 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # es=1 00:25:15.604 19:34:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.604 19:34:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.604 19:34:13 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@650 -- # local es=0 00:25:15.604 19:34:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 request: 00:25:15.604 { 00:25:15.604 "name": "NVMe0", 00:25:15.604 "trtype": "tcp", 00:25:15.604 "traddr": "10.0.0.2", 00:25:15.604 "hostaddr": 
"10.0.0.2", 00:25:15.604 "hostsvcid": "60000", 00:25:15.604 "adrfam": "ipv4", 00:25:15.604 "trsvcid": "4420", 00:25:15.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.604 "multipath": "disable", 00:25:15.604 "method": "bdev_nvme_attach_controller", 00:25:15.604 "req_id": 1 00:25:15.604 } 00:25:15.604 Got JSON-RPC error response 00:25:15.604 response: 00:25:15.604 { 00:25:15.604 "code": -114, 00:25:15.604 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:15.604 } 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # es=1 00:25:15.604 19:34:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.604 19:34:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.604 19:34:13 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:15.604 19:34:13 -- common/autotest_common.sh@650 -- # local es=0 00:25:15.604 19:34:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:15.604 19:34:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:15.604 19:34:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 request: 00:25:15.604 { 00:25:15.604 "name": "NVMe0", 00:25:15.604 "trtype": "tcp", 00:25:15.604 "traddr": "10.0.0.2", 00:25:15.604 "hostaddr": "10.0.0.2", 00:25:15.604 "hostsvcid": "60000", 00:25:15.604 "adrfam": "ipv4", 00:25:15.604 "trsvcid": "4420", 00:25:15.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.604 "multipath": "failover", 00:25:15.604 "method": "bdev_nvme_attach_controller", 00:25:15.604 "req_id": 1 00:25:15.604 } 00:25:15.604 Got JSON-RPC error response 00:25:15.604 response: 00:25:15.604 { 00:25:15.604 "code": -114, 00:25:15.604 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:15.604 } 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@653 -- # es=1 00:25:15.604 19:34:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.604 19:34:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.604 19:34:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.604 19:34:13 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:15.604 19:34:13 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.604 19:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.604 19:34:13 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:15.604 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.604 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.862 00:25:15.862 19:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.862 19:34:13 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.862 19:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.862 19:34:13 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:15.862 19:34:13 -- common/autotest_common.sh@10 -- # set +x 00:25:15.862 19:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.862 19:34:13 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:15.862 19:34:13 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.796 0 00:25:16.796 19:34:15 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:16.796 19:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.796 19:34:15 -- common/autotest_common.sh@10 -- # set +x 00:25:17.054 19:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.054 19:34:15 -- host/multicontroller.sh@100 -- # killprocess 1278968 00:25:17.054 19:34:15 -- common/autotest_common.sh@936 -- # '[' -z 1278968 ']' 00:25:17.054 19:34:15 -- common/autotest_common.sh@940 -- # kill -0 1278968 00:25:17.054 19:34:15 -- common/autotest_common.sh@941 -- # uname 00:25:17.054 19:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.054 19:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1278968 00:25:17.054 19:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:17.054 19:34:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:17.054 19:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1278968' 00:25:17.054 killing process with pid 1278968 00:25:17.054 19:34:15 -- common/autotest_common.sh@955 -- # kill 1278968 00:25:17.054 19:34:15 -- common/autotest_common.sh@960 -- # wait 1278968 00:25:17.312 19:34:15 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.312 19:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.312 19:34:15 -- common/autotest_common.sh@10 -- # set +x 00:25:17.312 19:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.312 19:34:15 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:17.312 19:34:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.312 19:34:15 -- common/autotest_common.sh@10 -- # set +x 00:25:17.312 19:34:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.312 19:34:15 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
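The multicontroller exchange above can be replayed by hand against a running bdevperf instance. A hedged sketch follows: rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, $SPDK_DIR is an assumed stand-in for the SPDK checkout path (not a variable the test sets), and the bdev names, addresses, ports and NQN are copied verbatim from the log.

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1
  # The first attach registers the controller under the bdev name NVMe0.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -i 10.0.0.2 -c 60000
  # Re-attaching the same name with multipath disabled, or with -x failover over an
  # identical network path, is rejected with -114 exactly as the responses above show.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn \
       -i 10.0.0.2 -c 60000 -x failover || echo 'expected failure: duplicate path'
  # A second listener port (4421) is accepted as an extra path, can be detached again,
  # and a second controller NVMe1 can then be attached through it.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  $rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -i 10.0.0.2 -c 60000
  $rpc bdev_nvme_get_controllers | grep -c NVMe   # the test expects 2 controllers here
  # I/O is then driven with: $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests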
00:25:17.312 19:34:15 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.312 19:34:15 -- common/autotest_common.sh@1607 -- # read -r file 00:25:17.312 19:34:15 -- common/autotest_common.sh@1606 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:17.312 19:34:15 -- common/autotest_common.sh@1606 -- # sort -u 00:25:17.312 19:34:15 -- common/autotest_common.sh@1608 -- # cat 00:25:17.312 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:17.312 [2024-11-17 19:34:12.559642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:17.312 [2024-11-17 19:34:12.559777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278968 ] 00:25:17.312 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.312 [2024-11-17 19:34:12.619779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.312 [2024-11-17 19:34:12.704352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.312 [2024-11-17 19:34:13.914128] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 2079a50b-fccf-4d8f-ae79-c5177055ba04 already exists 00:25:17.312 [2024-11-17 19:34:13.914166] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:2079a50b-fccf-4d8f-ae79-c5177055ba04 alias for bdev NVMe1n1 00:25:17.312 [2024-11-17 19:34:13.914192] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:17.312 Running I/O for 1 seconds... 00:25:17.312 00:25:17.312 Latency(us) 00:25:17.312 [2024-11-17T18:34:15.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.312 [2024-11-17T18:34:15.579Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:17.312 NVMe0n1 : 1.01 20234.76 79.04 0.00 0.00 6316.91 2609.30 11359.57 00:25:17.312 [2024-11-17T18:34:15.579Z] =================================================================================================================== 00:25:17.312 [2024-11-17T18:34:15.579Z] Total : 20234.76 79.04 0.00 0.00 6316.91 2609.30 11359.57 00:25:17.312 Received shutdown signal, test time was about 1.000000 seconds 00:25:17.312 00:25:17.312 Latency(us) 00:25:17.312 [2024-11-17T18:34:15.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.312 [2024-11-17T18:34:15.579Z] =================================================================================================================== 00:25:17.312 [2024-11-17T18:34:15.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.312 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:17.312 19:34:15 -- common/autotest_common.sh@1613 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.312 19:34:15 -- common/autotest_common.sh@1607 -- # read -r file 00:25:17.312 19:34:15 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:17.312 19:34:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:17.312 19:34:15 -- nvmf/common.sh@116 -- # sync 00:25:17.312 19:34:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:17.312 19:34:15 -- nvmf/common.sh@119 -- # set +e 00:25:17.312 19:34:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:17.312 19:34:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:17.312 rmmod nvme_tcp 
00:25:17.312 rmmod nvme_fabrics 00:25:17.312 rmmod nvme_keyring 00:25:17.312 19:34:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.312 19:34:15 -- nvmf/common.sh@123 -- # set -e 00:25:17.312 19:34:15 -- nvmf/common.sh@124 -- # return 0 00:25:17.312 19:34:15 -- nvmf/common.sh@477 -- # '[' -n 1278809 ']' 00:25:17.312 19:34:15 -- nvmf/common.sh@478 -- # killprocess 1278809 00:25:17.312 19:34:15 -- common/autotest_common.sh@936 -- # '[' -z 1278809 ']' 00:25:17.312 19:34:15 -- common/autotest_common.sh@940 -- # kill -0 1278809 00:25:17.312 19:34:15 -- common/autotest_common.sh@941 -- # uname 00:25:17.312 19:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.312 19:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1278809 00:25:17.312 19:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:17.312 19:34:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:17.312 19:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1278809' 00:25:17.312 killing process with pid 1278809 00:25:17.313 19:34:15 -- common/autotest_common.sh@955 -- # kill 1278809 00:25:17.313 19:34:15 -- common/autotest_common.sh@960 -- # wait 1278809 00:25:17.571 19:34:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.571 19:34:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.571 19:34:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.571 19:34:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.571 19:34:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:17.571 19:34:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.571 19:34:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.571 19:34:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.103 19:34:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:20.103 00:25:20.103 real 0m8.715s 00:25:20.103 user 0m16.456s 00:25:20.103 sys 0m2.353s 00:25:20.103 19:34:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:20.103 19:34:17 -- common/autotest_common.sh@10 -- # set +x 00:25:20.103 ************************************ 00:25:20.103 END TEST nvmf_multicontroller 00:25:20.103 ************************************ 00:25:20.103 19:34:17 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:20.103 19:34:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:20.103 19:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.103 19:34:17 -- common/autotest_common.sh@10 -- # set +x 00:25:20.103 ************************************ 00:25:20.103 START TEST nvmf_aer 00:25:20.103 ************************************ 00:25:20.103 19:34:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:20.103 * Looking for test storage... 
00:25:20.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.103 19:34:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:20.103 19:34:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:20.103 19:34:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:20.103 19:34:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:20.103 19:34:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:20.103 19:34:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:20.103 19:34:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:20.103 19:34:17 -- scripts/common.sh@335 -- # IFS=.-: 00:25:20.103 19:34:17 -- scripts/common.sh@335 -- # read -ra ver1 00:25:20.103 19:34:17 -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.103 19:34:17 -- scripts/common.sh@336 -- # read -ra ver2 00:25:20.103 19:34:17 -- scripts/common.sh@337 -- # local 'op=<' 00:25:20.103 19:34:17 -- scripts/common.sh@339 -- # ver1_l=2 00:25:20.103 19:34:17 -- scripts/common.sh@340 -- # ver2_l=1 00:25:20.103 19:34:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:20.103 19:34:17 -- scripts/common.sh@343 -- # case "$op" in 00:25:20.103 19:34:17 -- scripts/common.sh@344 -- # : 1 00:25:20.103 19:34:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:20.103 19:34:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.103 19:34:17 -- scripts/common.sh@364 -- # decimal 1 00:25:20.103 19:34:17 -- scripts/common.sh@352 -- # local d=1 00:25:20.103 19:34:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.103 19:34:17 -- scripts/common.sh@354 -- # echo 1 00:25:20.103 19:34:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:20.103 19:34:17 -- scripts/common.sh@365 -- # decimal 2 00:25:20.103 19:34:17 -- scripts/common.sh@352 -- # local d=2 00:25:20.103 19:34:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.103 19:34:17 -- scripts/common.sh@354 -- # echo 2 00:25:20.103 19:34:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:20.103 19:34:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:20.103 19:34:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:20.103 19:34:17 -- scripts/common.sh@367 -- # return 0 00:25:20.103 19:34:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.103 19:34:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:20.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.103 --rc genhtml_branch_coverage=1 00:25:20.103 --rc genhtml_function_coverage=1 00:25:20.103 --rc genhtml_legend=1 00:25:20.103 --rc geninfo_all_blocks=1 00:25:20.103 --rc geninfo_unexecuted_blocks=1 00:25:20.103 00:25:20.103 ' 00:25:20.103 19:34:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:20.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.103 --rc genhtml_branch_coverage=1 00:25:20.103 --rc genhtml_function_coverage=1 00:25:20.103 --rc genhtml_legend=1 00:25:20.103 --rc geninfo_all_blocks=1 00:25:20.103 --rc geninfo_unexecuted_blocks=1 00:25:20.103 00:25:20.103 ' 00:25:20.103 19:34:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:20.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.103 --rc genhtml_branch_coverage=1 00:25:20.103 --rc genhtml_function_coverage=1 00:25:20.103 --rc genhtml_legend=1 00:25:20.103 --rc geninfo_all_blocks=1 00:25:20.103 --rc geninfo_unexecuted_blocks=1 00:25:20.103 00:25:20.103 ' 
00:25:20.103 19:34:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:20.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.103 --rc genhtml_branch_coverage=1 00:25:20.103 --rc genhtml_function_coverage=1 00:25:20.103 --rc genhtml_legend=1 00:25:20.103 --rc geninfo_all_blocks=1 00:25:20.103 --rc geninfo_unexecuted_blocks=1 00:25:20.103 00:25:20.103 ' 00:25:20.103 19:34:17 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.103 19:34:17 -- nvmf/common.sh@7 -- # uname -s 00:25:20.103 19:34:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.103 19:34:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.103 19:34:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.103 19:34:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.103 19:34:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.103 19:34:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.103 19:34:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.103 19:34:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.103 19:34:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.103 19:34:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.103 19:34:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.103 19:34:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.103 19:34:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.103 19:34:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.103 19:34:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.103 19:34:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.103 19:34:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.103 19:34:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.103 19:34:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.103 19:34:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.103 19:34:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.103 19:34:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.103 19:34:17 -- paths/export.sh@5 -- # export PATH 00:25:20.104 19:34:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.104 19:34:17 -- nvmf/common.sh@46 -- # : 0 00:25:20.104 19:34:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:20.104 19:34:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:20.104 19:34:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:20.104 19:34:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.104 19:34:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.104 19:34:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:20.104 19:34:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:20.104 19:34:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:20.104 19:34:17 -- host/aer.sh@11 -- # nvmftestinit 00:25:20.104 19:34:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:20.104 19:34:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.104 19:34:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:20.104 19:34:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:20.104 19:34:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:20.104 19:34:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.104 19:34:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.104 19:34:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.104 19:34:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:20.104 19:34:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:20.104 19:34:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:20.104 19:34:17 -- common/autotest_common.sh@10 -- # set +x 00:25:22.009 19:34:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:22.009 19:34:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:22.009 19:34:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:22.009 19:34:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:22.009 19:34:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:22.009 19:34:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:22.009 19:34:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:22.009 19:34:20 -- nvmf/common.sh@294 -- # net_devs=() 00:25:22.009 19:34:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:22.009 19:34:20 -- nvmf/common.sh@295 -- # e810=() 00:25:22.009 19:34:20 -- nvmf/common.sh@295 -- # local -ga e810 00:25:22.009 19:34:20 -- nvmf/common.sh@296 -- # x722=() 00:25:22.009 
19:34:20 -- nvmf/common.sh@296 -- # local -ga x722 00:25:22.009 19:34:20 -- nvmf/common.sh@297 -- # mlx=() 00:25:22.009 19:34:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:22.009 19:34:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.009 19:34:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:22.009 19:34:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:22.009 19:34:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:22.009 19:34:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:22.009 19:34:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:22.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:22.009 19:34:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:22.009 19:34:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:22.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:22.009 19:34:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:22.009 19:34:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:22.009 19:34:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:22.010 19:34:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:22.010 19:34:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:22.010 19:34:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.010 19:34:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:22.010 19:34:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.010 19:34:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:22.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:22.010 19:34:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.010 19:34:20 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:22.010 19:34:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.010 19:34:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:22.010 19:34:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.010 19:34:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:22.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:22.010 19:34:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.010 19:34:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:22.010 19:34:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:22.010 19:34:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:22.010 19:34:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:22.010 19:34:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:22.010 19:34:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.010 19:34:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.010 19:34:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.010 19:34:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:22.010 19:34:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.010 19:34:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.010 19:34:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:22.010 19:34:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.010 19:34:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.010 19:34:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:22.010 19:34:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:22.010 19:34:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.010 19:34:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.010 19:34:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.010 19:34:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.010 19:34:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:22.010 19:34:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.010 19:34:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.010 19:34:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.010 19:34:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:22.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:25:22.010 00:25:22.010 --- 10.0.0.2 ping statistics --- 00:25:22.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.010 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:22.010 19:34:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:22.010 00:25:22.010 --- 10.0.0.1 ping statistics --- 00:25:22.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.010 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:22.010 19:34:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.010 19:34:20 -- nvmf/common.sh@410 -- # return 0 00:25:22.010 19:34:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:22.010 19:34:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.010 19:34:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:22.010 19:34:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:22.010 19:34:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.010 19:34:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:22.010 19:34:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:22.010 19:34:20 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:22.010 19:34:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:22.010 19:34:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.010 19:34:20 -- common/autotest_common.sh@10 -- # set +x 00:25:22.010 19:34:20 -- nvmf/common.sh@469 -- # nvmfpid=1281210 00:25:22.010 19:34:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.010 19:34:20 -- nvmf/common.sh@470 -- # waitforlisten 1281210 00:25:22.010 19:34:20 -- common/autotest_common.sh@829 -- # '[' -z 1281210 ']' 00:25:22.010 19:34:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.010 19:34:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.010 19:34:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.010 19:34:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.010 19:34:20 -- common/autotest_common.sh@10 -- # set +x 00:25:22.010 [2024-11-17 19:34:20.244793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:22.010 [2024-11-17 19:34:20.244875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.268 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.268 [2024-11-17 19:34:20.318692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.268 [2024-11-17 19:34:20.415245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.268 [2024-11-17 19:34:20.415421] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.268 [2024-11-17 19:34:20.415442] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.268 [2024-11-17 19:34:20.415457] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
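The target this aer run talks to is started inside a network namespace that nvmf/common.sh assembled in the lines above. Condensed from the trace into a standalone sketch (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and the nvmf_tgt arguments are specific to this host and run; $SPDK_DIR is again an assumed stand-in for the checkout path):

  # Move one port of the NIC into a private namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the default NVMe/TCP port and check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target runs inside the namespace, so the host side acts as the initiator.
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The async_init run later in this log repeats the same plumbing, differing only in the core mask passed to nvmf_tgt (-m 0x1 instead of -m 0xF).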
00:25:22.268 [2024-11-17 19:34:20.415528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.268 [2024-11-17 19:34:20.415588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.268 [2024-11-17 19:34:20.415627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.268 [2024-11-17 19:34:20.415630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.201 19:34:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.201 19:34:21 -- common/autotest_common.sh@862 -- # return 0 00:25:23.201 19:34:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:23.201 19:34:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 19:34:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.201 19:34:21 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.201 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 [2024-11-17 19:34:21.281462] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.201 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.201 19:34:21 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:23.201 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 Malloc0 00:25:23.201 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.201 19:34:21 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:23.201 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.201 19:34:21 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.201 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.201 19:34:21 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.201 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 [2024-11-17 19:34:21.332769] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.201 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.201 19:34:21 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:23.201 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.201 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.201 [2024-11-17 19:34:21.340477] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:23.201 [ 00:25:23.201 { 00:25:23.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:23.201 "subtype": "Discovery", 00:25:23.201 "listen_addresses": [], 00:25:23.201 "allow_any_host": true, 00:25:23.201 "hosts": [] 00:25:23.201 }, 00:25:23.201 { 00:25:23.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:23.201 "subtype": "NVMe", 00:25:23.201 "listen_addresses": [ 00:25:23.201 { 00:25:23.201 "transport": "TCP", 00:25:23.201 "trtype": "TCP", 00:25:23.201 "adrfam": "IPv4", 00:25:23.201 "traddr": "10.0.0.2", 00:25:23.201 "trsvcid": "4420" 00:25:23.201 } 00:25:23.201 ], 00:25:23.201 "allow_any_host": true, 00:25:23.201 "hosts": [], 00:25:23.201 "serial_number": "SPDK00000000000001", 00:25:23.201 "model_number": "SPDK bdev Controller", 00:25:23.201 "max_namespaces": 2, 00:25:23.201 "min_cntlid": 1, 00:25:23.201 "max_cntlid": 65519, 00:25:23.201 "namespaces": [ 00:25:23.201 { 00:25:23.201 "nsid": 1, 00:25:23.201 "bdev_name": "Malloc0", 00:25:23.201 "name": "Malloc0", 00:25:23.201 "nguid": "E230850E81F64052982FCC98C92E9C9F", 00:25:23.201 "uuid": "e230850e-81f6-4052-982f-cc98c92e9c9f" 00:25:23.201 } 00:25:23.201 ] 00:25:23.201 } 00:25:23.201 ] 00:25:23.201 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.201 19:34:21 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:23.201 19:34:21 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:23.201 19:34:21 -- host/aer.sh@33 -- # aerpid=1281375 00:25:23.201 19:34:21 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:23.201 19:34:21 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:23.201 19:34:21 -- common/autotest_common.sh@1254 -- # local i=0 00:25:23.201 19:34:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:23.201 19:34:21 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:25:23.201 19:34:21 -- common/autotest_common.sh@1257 -- # i=1 00:25:23.201 19:34:21 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:25:23.201 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.201 19:34:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:23.201 19:34:21 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:25:23.201 19:34:21 -- common/autotest_common.sh@1257 -- # i=2 00:25:23.201 19:34:21 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:25:23.459 19:34:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:23.459 19:34:21 -- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']' 00:25:23.459 19:34:21 -- common/autotest_common.sh@1257 -- # i=3 00:25:23.459 19:34:21 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:25:23.459 19:34:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:23.459 19:34:21 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:23.459 19:34:21 -- common/autotest_common.sh@1265 -- # return 0 00:25:23.459 19:34:21 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:23.459 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.459 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.459 Malloc1 00:25:23.459 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.459 19:34:21 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:23.459 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.459 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.459 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.459 19:34:21 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:23.459 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.459 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.459 Asynchronous Event Request test 00:25:23.459 Attaching to 10.0.0.2 00:25:23.459 Attached to 10.0.0.2 00:25:23.459 Registering asynchronous event callbacks... 00:25:23.459 Starting namespace attribute notice tests for all controllers... 00:25:23.459 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:23.459 aer_cb - Changed Namespace 00:25:23.459 Cleaning up... 00:25:23.459 [ 00:25:23.459 { 00:25:23.459 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:23.459 "subtype": "Discovery", 00:25:23.459 "listen_addresses": [], 00:25:23.459 "allow_any_host": true, 00:25:23.459 "hosts": [] 00:25:23.459 }, 00:25:23.459 { 00:25:23.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.717 "subtype": "NVMe", 00:25:23.717 "listen_addresses": [ 00:25:23.717 { 00:25:23.717 "transport": "TCP", 00:25:23.717 "trtype": "TCP", 00:25:23.717 "adrfam": "IPv4", 00:25:23.717 "traddr": "10.0.0.2", 00:25:23.717 "trsvcid": "4420" 00:25:23.717 } 00:25:23.717 ], 00:25:23.717 "allow_any_host": true, 00:25:23.717 "hosts": [], 00:25:23.717 "serial_number": "SPDK00000000000001", 00:25:23.717 "model_number": "SPDK bdev Controller", 00:25:23.717 "max_namespaces": 2, 00:25:23.717 "min_cntlid": 1, 00:25:23.717 "max_cntlid": 65519, 00:25:23.717 "namespaces": [ 00:25:23.717 { 00:25:23.717 "nsid": 1, 00:25:23.717 "bdev_name": "Malloc0", 00:25:23.717 "name": "Malloc0", 00:25:23.717 "nguid": "E230850E81F64052982FCC98C92E9C9F", 00:25:23.717 "uuid": "e230850e-81f6-4052-982f-cc98c92e9c9f" 00:25:23.717 }, 00:25:23.717 { 00:25:23.717 "nsid": 2, 00:25:23.717 "bdev_name": "Malloc1", 00:25:23.717 "name": "Malloc1", 00:25:23.717 "nguid": "B135433A1CD544898C494CD10ABB1FAC", 00:25:23.717 "uuid": "b135433a-1cd5-4489-8c49-4cd10abb1fac" 00:25:23.717 } 00:25:23.717 ] 00:25:23.717 } 00:25:23.717 ] 00:25:23.717 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.717 19:34:21 -- host/aer.sh@43 -- # wait 1281375 00:25:23.717 19:34:21 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:23.717 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.717 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.717 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.717 19:34:21 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:23.717 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.717 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.717 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.717 19:34:21 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.717 19:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.717 19:34:21 -- common/autotest_common.sh@10 -- # set +x 00:25:23.717 19:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.717 19:34:21 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:23.717 19:34:21 -- host/aer.sh@51 -- # nvmftestfini 00:25:23.717 19:34:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:23.717 19:34:21 -- nvmf/common.sh@116 -- # sync 00:25:23.717 19:34:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:23.717 19:34:21 -- nvmf/common.sh@119 -- # set +e 00:25:23.717 19:34:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:23.717 19:34:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:23.717 rmmod nvme_tcp 00:25:23.717 rmmod nvme_fabrics 00:25:23.717 rmmod nvme_keyring 00:25:23.717 19:34:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:23.717 19:34:21 -- nvmf/common.sh@123 -- # set -e 00:25:23.717 19:34:21 -- nvmf/common.sh@124 -- # return 0 00:25:23.717 19:34:21 -- nvmf/common.sh@477 -- # '[' -n 1281210 ']' 00:25:23.717 19:34:21 -- nvmf/common.sh@478 -- # killprocess 1281210 00:25:23.717 19:34:21 -- common/autotest_common.sh@936 -- # '[' -z 1281210 ']' 00:25:23.717 19:34:21 -- common/autotest_common.sh@940 -- # kill -0 1281210 00:25:23.717 19:34:21 -- common/autotest_common.sh@941 -- # uname 00:25:23.717 19:34:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:23.717 19:34:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1281210 00:25:23.717 19:34:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:23.717 19:34:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:23.717 19:34:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1281210' 00:25:23.717 killing process with pid 1281210 00:25:23.717 19:34:21 -- common/autotest_common.sh@955 -- # kill 1281210 00:25:23.717 [2024-11-17 19:34:21.880356] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:23.717 19:34:21 -- common/autotest_common.sh@960 -- # wait 1281210 00:25:23.976 19:34:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:23.976 19:34:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:23.976 19:34:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:23.976 19:34:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.976 19:34:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:23.976 19:34:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.976 19:34:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.976 19:34:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.509 19:34:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:26.509 00:25:26.509 real 0m6.355s 00:25:26.509 user 0m7.747s 00:25:26.509 sys 0m2.033s 00:25:26.509 19:34:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:26.509 19:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:26.509 ************************************ 00:25:26.509 END TEST nvmf_aer 00:25:26.509 ************************************ 00:25:26.509 19:34:24 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:26.509 19:34:24 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:25:26.509 19:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:26.509 19:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:26.509 ************************************ 00:25:26.509 START TEST nvmf_async_init 00:25:26.509 ************************************ 00:25:26.509 19:34:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:26.509 * Looking for test storage... 00:25:26.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.509 19:34:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:26.509 19:34:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:26.509 19:34:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:26.509 19:34:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:26.509 19:34:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:26.509 19:34:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:26.509 19:34:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:26.509 19:34:24 -- scripts/common.sh@335 -- # IFS=.-: 00:25:26.509 19:34:24 -- scripts/common.sh@335 -- # read -ra ver1 00:25:26.509 19:34:24 -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.509 19:34:24 -- scripts/common.sh@336 -- # read -ra ver2 00:25:26.509 19:34:24 -- scripts/common.sh@337 -- # local 'op=<' 00:25:26.509 19:34:24 -- scripts/common.sh@339 -- # ver1_l=2 00:25:26.509 19:34:24 -- scripts/common.sh@340 -- # ver2_l=1 00:25:26.509 19:34:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:26.509 19:34:24 -- scripts/common.sh@343 -- # case "$op" in 00:25:26.509 19:34:24 -- scripts/common.sh@344 -- # : 1 00:25:26.509 19:34:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:26.509 19:34:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:26.509 19:34:24 -- scripts/common.sh@364 -- # decimal 1 00:25:26.509 19:34:24 -- scripts/common.sh@352 -- # local d=1 00:25:26.509 19:34:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.509 19:34:24 -- scripts/common.sh@354 -- # echo 1 00:25:26.509 19:34:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:26.509 19:34:24 -- scripts/common.sh@365 -- # decimal 2 00:25:26.509 19:34:24 -- scripts/common.sh@352 -- # local d=2 00:25:26.509 19:34:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.509 19:34:24 -- scripts/common.sh@354 -- # echo 2 00:25:26.509 19:34:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:26.509 19:34:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:26.509 19:34:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:26.509 19:34:24 -- scripts/common.sh@367 -- # return 0 00:25:26.509 19:34:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.509 19:34:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:26.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.509 --rc genhtml_branch_coverage=1 00:25:26.509 --rc genhtml_function_coverage=1 00:25:26.509 --rc genhtml_legend=1 00:25:26.509 --rc geninfo_all_blocks=1 00:25:26.509 --rc geninfo_unexecuted_blocks=1 00:25:26.509 00:25:26.509 ' 00:25:26.509 19:34:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:26.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.509 --rc genhtml_branch_coverage=1 00:25:26.509 --rc genhtml_function_coverage=1 00:25:26.509 --rc genhtml_legend=1 00:25:26.509 --rc geninfo_all_blocks=1 00:25:26.509 --rc geninfo_unexecuted_blocks=1 00:25:26.509 00:25:26.509 ' 00:25:26.509 19:34:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:26.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.509 --rc genhtml_branch_coverage=1 00:25:26.509 --rc genhtml_function_coverage=1 00:25:26.509 --rc genhtml_legend=1 00:25:26.509 --rc geninfo_all_blocks=1 00:25:26.509 --rc geninfo_unexecuted_blocks=1 00:25:26.509 00:25:26.509 ' 00:25:26.509 19:34:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:26.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.509 --rc genhtml_branch_coverage=1 00:25:26.509 --rc genhtml_function_coverage=1 00:25:26.509 --rc genhtml_legend=1 00:25:26.509 --rc geninfo_all_blocks=1 00:25:26.509 --rc geninfo_unexecuted_blocks=1 00:25:26.509 00:25:26.509 ' 00:25:26.509 19:34:24 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.509 19:34:24 -- nvmf/common.sh@7 -- # uname -s 00:25:26.509 19:34:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.509 19:34:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.509 19:34:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.509 19:34:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.509 19:34:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.509 19:34:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.509 19:34:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.509 19:34:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.509 19:34:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.509 19:34:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.509 19:34:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.509 19:34:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.509 19:34:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.509 19:34:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.509 19:34:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.509 19:34:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.509 19:34:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.509 19:34:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.509 19:34:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.509 19:34:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.509 19:34:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.509 19:34:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.509 19:34:24 -- paths/export.sh@5 -- # export PATH 00:25:26.510 19:34:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.510 19:34:24 -- nvmf/common.sh@46 -- # : 0 00:25:26.510 19:34:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:26.510 19:34:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:26.510 19:34:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:26.510 19:34:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.510 19:34:24 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.510 19:34:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:26.510 19:34:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:26.510 19:34:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:26.510 19:34:24 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:26.510 19:34:24 -- host/async_init.sh@14 -- # null_block_size=512 00:25:26.510 19:34:24 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:26.510 19:34:24 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:26.510 19:34:24 -- host/async_init.sh@20 -- # uuidgen 00:25:26.510 19:34:24 -- host/async_init.sh@20 -- # tr -d - 00:25:26.510 19:34:24 -- host/async_init.sh@20 -- # nguid=4d95b5d95e734f11a8e606c0ffd7c33b 00:25:26.510 19:34:24 -- host/async_init.sh@22 -- # nvmftestinit 00:25:26.510 19:34:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:26.510 19:34:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.510 19:34:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:26.510 19:34:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:26.510 19:34:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:26.510 19:34:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.510 19:34:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.510 19:34:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.510 19:34:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:26.510 19:34:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:26.510 19:34:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:26.510 19:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:28.412 19:34:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:28.412 19:34:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:28.412 19:34:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:28.412 19:34:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:28.412 19:34:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:28.412 19:34:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:28.412 19:34:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:28.412 19:34:26 -- nvmf/common.sh@294 -- # net_devs=() 00:25:28.412 19:34:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:28.412 19:34:26 -- nvmf/common.sh@295 -- # e810=() 00:25:28.412 19:34:26 -- nvmf/common.sh@295 -- # local -ga e810 00:25:28.412 19:34:26 -- nvmf/common.sh@296 -- # x722=() 00:25:28.412 19:34:26 -- nvmf/common.sh@296 -- # local -ga x722 00:25:28.412 19:34:26 -- nvmf/common.sh@297 -- # mlx=() 00:25:28.412 19:34:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:28.412 19:34:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.412 19:34:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:28.412 19:34:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:28.412 19:34:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:28.412 19:34:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:28.412 19:34:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.412 19:34:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:28.412 19:34:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.412 19:34:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:28.412 19:34:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:28.412 19:34:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.412 19:34:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:28.412 19:34:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.412 19:34:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.412 19:34:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.412 19:34:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:28.412 19:34:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.412 19:34:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:28.412 19:34:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.412 19:34:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.412 19:34:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.412 19:34:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:28.412 19:34:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:28.412 19:34:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:28.412 19:34:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:28.412 19:34:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.412 19:34:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.412 19:34:26 -- 
nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.412 19:34:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:28.412 19:34:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.412 19:34:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.412 19:34:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:28.412 19:34:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.412 19:34:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.412 19:34:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:28.412 19:34:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:28.412 19:34:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.412 19:34:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.412 19:34:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.412 19:34:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.412 19:34:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:28.412 19:34:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.413 19:34:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.413 19:34:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.413 19:34:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:28.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:25:28.413 00:25:28.413 --- 10.0.0.2 ping statistics --- 00:25:28.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.413 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:28.413 19:34:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:25:28.413 00:25:28.413 --- 10.0.0.1 ping statistics --- 00:25:28.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.413 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:28.413 19:34:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.413 19:34:26 -- nvmf/common.sh@410 -- # return 0 00:25:28.413 19:34:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:28.413 19:34:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.413 19:34:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:28.413 19:34:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:28.413 19:34:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.413 19:34:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:28.413 19:34:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:28.413 19:34:26 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:28.413 19:34:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:28.413 19:34:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.413 19:34:26 -- common/autotest_common.sh@10 -- # set +x 00:25:28.413 19:34:26 -- nvmf/common.sh@469 -- # nvmfpid=1283452 00:25:28.413 19:34:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:28.413 19:34:26 -- nvmf/common.sh@470 -- # waitforlisten 1283452 00:25:28.413 19:34:26 -- common/autotest_common.sh@829 -- # '[' -z 1283452 ']' 00:25:28.413 19:34:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.413 19:34:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.413 19:34:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.413 19:34:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.413 19:34:26 -- common/autotest_common.sh@10 -- # set +x 00:25:28.413 [2024-11-17 19:34:26.533153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:28.413 [2024-11-17 19:34:26.533237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.413 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.413 [2024-11-17 19:34:26.601618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.671 [2024-11-17 19:34:26.690496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:28.671 [2024-11-17 19:34:26.690682] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.671 [2024-11-17 19:34:26.690704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.671 [2024-11-17 19:34:26.690719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
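For reference, the target/initiator split that nvmf_tcp_init traced above reduces to the following sequence. This is a condensed sketch using the interface, namespace and address values reported in this run (the two ice ports cvl_0_0/cvl_0_1 found earlier), not a substitute for nvmf/common.sh:

  NS=cvl_0_0_ns_spdk       # namespace that will host nvmf_tgt
  TGT_IF=cvl_0_0           # target-side port, gets 10.0.0.2/24 inside the namespace
  INI_IF=cvl_0_1           # initiator-side port, keeps 10.0.0.1/24 in the root namespace

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> root namespace

nvmf_tgt is then launched with the 'ip netns exec cvl_0_0_ns_spdk' prefix seen in NVMF_APP, so the 10.0.0.2 listeners used below live inside that namespace while the initiator side stays in the root namespace.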
00:25:28.671 [2024-11-17 19:34:26.690751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.237 19:34:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.237 19:34:27 -- common/autotest_common.sh@862 -- # return 0 00:25:29.237 19:34:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:29.237 19:34:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.237 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 19:34:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.495 19:34:27 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 [2024-11-17 19:34:27.520089] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.495 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.495 19:34:27 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 null0 00:25:29.495 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.495 19:34:27 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.495 19:34:27 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.495 19:34:27 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4d95b5d95e734f11a8e606c0ffd7c33b 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.495 19:34:27 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.495 [2024-11-17 19:34:27.560347] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.495 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.495 19:34:27 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:29.495 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.495 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.753 nvme0n1 00:25:29.753 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.753 19:34:27 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:29.753 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.753 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.753 [ 00:25:29.753 { 00:25:29.753 "name": "nvme0n1", 00:25:29.753 "aliases": [ 00:25:29.753 
"4d95b5d9-5e73-4f11-a8e6-06c0ffd7c33b" 00:25:29.753 ], 00:25:29.753 "product_name": "NVMe disk", 00:25:29.753 "block_size": 512, 00:25:29.753 "num_blocks": 2097152, 00:25:29.753 "uuid": "4d95b5d9-5e73-4f11-a8e6-06c0ffd7c33b", 00:25:29.753 "assigned_rate_limits": { 00:25:29.753 "rw_ios_per_sec": 0, 00:25:29.753 "rw_mbytes_per_sec": 0, 00:25:29.753 "r_mbytes_per_sec": 0, 00:25:29.753 "w_mbytes_per_sec": 0 00:25:29.753 }, 00:25:29.753 "claimed": false, 00:25:29.753 "zoned": false, 00:25:29.753 "supported_io_types": { 00:25:29.753 "read": true, 00:25:29.753 "write": true, 00:25:29.753 "unmap": false, 00:25:29.753 "write_zeroes": true, 00:25:29.753 "flush": true, 00:25:29.753 "reset": true, 00:25:29.753 "compare": true, 00:25:29.753 "compare_and_write": true, 00:25:29.753 "abort": true, 00:25:29.753 "nvme_admin": true, 00:25:29.753 "nvme_io": true 00:25:29.753 }, 00:25:29.753 "driver_specific": { 00:25:29.753 "nvme": [ 00:25:29.753 { 00:25:29.753 "trid": { 00:25:29.753 "trtype": "TCP", 00:25:29.753 "adrfam": "IPv4", 00:25:29.753 "traddr": "10.0.0.2", 00:25:29.753 "trsvcid": "4420", 00:25:29.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:29.753 }, 00:25:29.753 "ctrlr_data": { 00:25:29.753 "cntlid": 1, 00:25:29.753 "vendor_id": "0x8086", 00:25:29.753 "model_number": "SPDK bdev Controller", 00:25:29.753 "serial_number": "00000000000000000000", 00:25:29.753 "firmware_revision": "24.01.1", 00:25:29.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.753 "oacs": { 00:25:29.753 "security": 0, 00:25:29.753 "format": 0, 00:25:29.753 "firmware": 0, 00:25:29.753 "ns_manage": 0 00:25:29.753 }, 00:25:29.753 "multi_ctrlr": true, 00:25:29.753 "ana_reporting": false 00:25:29.753 }, 00:25:29.753 "vs": { 00:25:29.753 "nvme_version": "1.3" 00:25:29.753 }, 00:25:29.753 "ns_data": { 00:25:29.753 "id": 1, 00:25:29.753 "can_share": true 00:25:29.753 } 00:25:29.753 } 00:25:29.753 ], 00:25:29.753 "mp_policy": "active_passive" 00:25:29.753 } 00:25:29.753 } 00:25:29.753 ] 00:25:29.753 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.753 19:34:27 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:29.753 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.753 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.753 [2024-11-17 19:34:27.812996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:29.753 [2024-11-17 19:34:27.813095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221dba0 (9): Bad file descriptor 00:25:29.753 [2024-11-17 19:34:27.954827] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:29.753 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.753 19:34:27 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:29.754 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.754 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.754 [ 00:25:29.754 { 00:25:29.754 "name": "nvme0n1", 00:25:29.754 "aliases": [ 00:25:29.754 "4d95b5d9-5e73-4f11-a8e6-06c0ffd7c33b" 00:25:29.754 ], 00:25:29.754 "product_name": "NVMe disk", 00:25:29.754 "block_size": 512, 00:25:29.754 "num_blocks": 2097152, 00:25:29.754 "uuid": "4d95b5d9-5e73-4f11-a8e6-06c0ffd7c33b", 00:25:29.754 "assigned_rate_limits": { 00:25:29.754 "rw_ios_per_sec": 0, 00:25:29.754 "rw_mbytes_per_sec": 0, 00:25:29.754 "r_mbytes_per_sec": 0, 00:25:29.754 "w_mbytes_per_sec": 0 00:25:29.754 }, 00:25:29.754 "claimed": false, 00:25:29.754 "zoned": false, 00:25:29.754 "supported_io_types": { 00:25:29.754 "read": true, 00:25:29.754 "write": true, 00:25:29.754 "unmap": false, 00:25:29.754 "write_zeroes": true, 00:25:29.754 "flush": true, 00:25:29.754 "reset": true, 00:25:29.754 "compare": true, 00:25:29.754 "compare_and_write": true, 00:25:29.754 "abort": true, 00:25:29.754 "nvme_admin": true, 00:25:29.754 "nvme_io": true 00:25:29.754 }, 00:25:29.754 "driver_specific": { 00:25:29.754 "nvme": [ 00:25:29.754 { 00:25:29.754 "trid": { 00:25:29.754 "trtype": "TCP", 00:25:29.754 "adrfam": "IPv4", 00:25:29.754 "traddr": "10.0.0.2", 00:25:29.754 "trsvcid": "4420", 00:25:29.754 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:29.754 }, 00:25:29.754 "ctrlr_data": { 00:25:29.754 "cntlid": 2, 00:25:29.754 "vendor_id": "0x8086", 00:25:29.754 "model_number": "SPDK bdev Controller", 00:25:29.754 "serial_number": "00000000000000000000", 00:25:29.754 "firmware_revision": "24.01.1", 00:25:29.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.754 "oacs": { 00:25:29.754 "security": 0, 00:25:29.754 "format": 0, 00:25:29.754 "firmware": 0, 00:25:29.754 "ns_manage": 0 00:25:29.754 }, 00:25:29.754 "multi_ctrlr": true, 00:25:29.754 "ana_reporting": false 00:25:29.754 }, 00:25:29.754 "vs": { 00:25:29.754 "nvme_version": "1.3" 00:25:29.754 }, 00:25:29.754 "ns_data": { 00:25:29.754 "id": 1, 00:25:29.754 "can_share": true 00:25:29.754 } 00:25:29.754 } 00:25:29.754 ], 00:25:29.754 "mp_policy": "active_passive" 00:25:29.754 } 00:25:29.754 } 00:25:29.754 ] 00:25:29.754 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.754 19:34:27 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.754 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.754 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.754 19:34:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.754 19:34:27 -- host/async_init.sh@53 -- # mktemp 00:25:29.754 19:34:27 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.v8HNernHUf 00:25:29.754 19:34:27 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:29.754 19:34:27 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.v8HNernHUf 00:25:29.754 19:34:27 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:29.754 19:34:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.754 19:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:29.754 19:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.754 19:34:28 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:29.754 19:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.754 19:34:28 -- common/autotest_common.sh@10 -- # set +x 00:25:29.754 [2024-11-17 19:34:28.009651] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:29.754 [2024-11-17 19:34:28.009802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:29.754 19:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.754 19:34:28 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8HNernHUf 00:25:29.754 19:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.754 19:34:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.012 19:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.012 19:34:28 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v8HNernHUf 00:25:30.012 19:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.012 19:34:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.012 [2024-11-17 19:34:28.025691] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:30.012 nvme0n1 00:25:30.012 19:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.012 19:34:28 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:30.012 19:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.012 19:34:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.012 [ 00:25:30.012 { 00:25:30.012 "name": "nvme0n1", 00:25:30.012 "aliases": [ 00:25:30.012 "4d95b5d9-5e73-4f11-a8e6-06c0ffd7c33b" 00:25:30.012 ], 00:25:30.012 "product_name": "NVMe disk", 00:25:30.012 "block_size": 512, 00:25:30.012 "num_blocks": 2097152, 00:25:30.012 "uuid": "4d95b5d9-5e73-4f11-a8e6-06c0ffd7c33b", 00:25:30.012 "assigned_rate_limits": { 00:25:30.012 "rw_ios_per_sec": 0, 00:25:30.012 "rw_mbytes_per_sec": 0, 00:25:30.012 "r_mbytes_per_sec": 0, 00:25:30.012 "w_mbytes_per_sec": 0 00:25:30.012 }, 00:25:30.012 "claimed": false, 00:25:30.012 "zoned": false, 00:25:30.012 "supported_io_types": { 00:25:30.012 "read": true, 00:25:30.012 "write": true, 00:25:30.012 "unmap": false, 00:25:30.012 "write_zeroes": true, 00:25:30.012 "flush": true, 00:25:30.012 "reset": true, 00:25:30.012 "compare": true, 00:25:30.012 "compare_and_write": true, 00:25:30.012 "abort": true, 00:25:30.012 "nvme_admin": true, 00:25:30.012 "nvme_io": true 00:25:30.012 }, 00:25:30.012 "driver_specific": { 00:25:30.012 "nvme": [ 00:25:30.012 { 00:25:30.012 "trid": { 00:25:30.012 "trtype": "TCP", 00:25:30.012 "adrfam": "IPv4", 00:25:30.012 "traddr": "10.0.0.2", 00:25:30.012 "trsvcid": "4421", 00:25:30.012 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:30.012 }, 00:25:30.012 "ctrlr_data": { 00:25:30.012 "cntlid": 3, 00:25:30.012 "vendor_id": "0x8086", 00:25:30.012 "model_number": "SPDK bdev Controller", 00:25:30.012 "serial_number": "00000000000000000000", 00:25:30.012 "firmware_revision": "24.01.1", 00:25:30.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.012 "oacs": { 00:25:30.012 "security": 0, 00:25:30.012 "format": 0, 00:25:30.012 "firmware": 0, 00:25:30.012 "ns_manage": 0 00:25:30.012 }, 00:25:30.012 "multi_ctrlr": true, 00:25:30.012 "ana_reporting": false 00:25:30.012 }, 00:25:30.012 "vs": 
{ 00:25:30.012 "nvme_version": "1.3" 00:25:30.012 }, 00:25:30.012 "ns_data": { 00:25:30.012 "id": 1, 00:25:30.012 "can_share": true 00:25:30.012 } 00:25:30.012 } 00:25:30.012 ], 00:25:30.012 "mp_policy": "active_passive" 00:25:30.012 } 00:25:30.012 } 00:25:30.012 ] 00:25:30.012 19:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.012 19:34:28 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.013 19:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.013 19:34:28 -- common/autotest_common.sh@10 -- # set +x 00:25:30.013 19:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.013 19:34:28 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.v8HNernHUf 00:25:30.013 19:34:28 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:30.013 19:34:28 -- host/async_init.sh@78 -- # nvmftestfini 00:25:30.013 19:34:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:30.013 19:34:28 -- nvmf/common.sh@116 -- # sync 00:25:30.013 19:34:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:30.013 19:34:28 -- nvmf/common.sh@119 -- # set +e 00:25:30.013 19:34:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:30.013 19:34:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:30.013 rmmod nvme_tcp 00:25:30.013 rmmod nvme_fabrics 00:25:30.013 rmmod nvme_keyring 00:25:30.013 19:34:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:30.013 19:34:28 -- nvmf/common.sh@123 -- # set -e 00:25:30.013 19:34:28 -- nvmf/common.sh@124 -- # return 0 00:25:30.013 19:34:28 -- nvmf/common.sh@477 -- # '[' -n 1283452 ']' 00:25:30.013 19:34:28 -- nvmf/common.sh@478 -- # killprocess 1283452 00:25:30.013 19:34:28 -- common/autotest_common.sh@936 -- # '[' -z 1283452 ']' 00:25:30.013 19:34:28 -- common/autotest_common.sh@940 -- # kill -0 1283452 00:25:30.013 19:34:28 -- common/autotest_common.sh@941 -- # uname 00:25:30.013 19:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.013 19:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1283452 00:25:30.013 19:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:30.013 19:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:30.013 19:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1283452' 00:25:30.013 killing process with pid 1283452 00:25:30.013 19:34:28 -- common/autotest_common.sh@955 -- # kill 1283452 00:25:30.013 19:34:28 -- common/autotest_common.sh@960 -- # wait 1283452 00:25:30.271 19:34:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:30.271 19:34:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:30.271 19:34:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:30.271 19:34:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.271 19:34:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:30.271 19:34:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.271 19:34:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.271 19:34:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.804 19:34:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:32.804 00:25:32.804 real 0m6.279s 00:25:32.804 user 0m3.043s 00:25:32.804 sys 0m1.893s 00:25:32.804 19:34:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:32.804 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:25:32.804 ************************************ 00:25:32.804 END TEST nvmf_async_init 00:25:32.804 
************************************ 00:25:32.804 19:34:30 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:32.804 19:34:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:32.804 19:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.804 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:25:32.804 ************************************ 00:25:32.804 START TEST dma 00:25:32.804 ************************************ 00:25:32.804 19:34:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:32.804 * Looking for test storage... 00:25:32.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.804 19:34:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:32.804 19:34:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:32.804 19:34:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:32.804 19:34:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:32.804 19:34:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:32.804 19:34:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:32.804 19:34:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:32.804 19:34:30 -- scripts/common.sh@335 -- # IFS=.-: 00:25:32.804 19:34:30 -- scripts/common.sh@335 -- # read -ra ver1 00:25:32.804 19:34:30 -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.804 19:34:30 -- scripts/common.sh@336 -- # read -ra ver2 00:25:32.804 19:34:30 -- scripts/common.sh@337 -- # local 'op=<' 00:25:32.804 19:34:30 -- scripts/common.sh@339 -- # ver1_l=2 00:25:32.804 19:34:30 -- scripts/common.sh@340 -- # ver2_l=1 00:25:32.804 19:34:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:32.804 19:34:30 -- scripts/common.sh@343 -- # case "$op" in 00:25:32.804 19:34:30 -- scripts/common.sh@344 -- # : 1 00:25:32.804 19:34:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:32.804 19:34:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.804 19:34:30 -- scripts/common.sh@364 -- # decimal 1 00:25:32.804 19:34:30 -- scripts/common.sh@352 -- # local d=1 00:25:32.804 19:34:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.804 19:34:30 -- scripts/common.sh@354 -- # echo 1 00:25:32.804 19:34:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:32.804 19:34:30 -- scripts/common.sh@365 -- # decimal 2 00:25:32.804 19:34:30 -- scripts/common.sh@352 -- # local d=2 00:25:32.804 19:34:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.804 19:34:30 -- scripts/common.sh@354 -- # echo 2 00:25:32.804 19:34:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:32.804 19:34:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:32.804 19:34:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:32.804 19:34:30 -- scripts/common.sh@367 -- # return 0 00:25:32.804 19:34:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.804 19:34:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:32.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.804 --rc genhtml_branch_coverage=1 00:25:32.804 --rc genhtml_function_coverage=1 00:25:32.804 --rc genhtml_legend=1 00:25:32.804 --rc geninfo_all_blocks=1 00:25:32.804 --rc geninfo_unexecuted_blocks=1 00:25:32.804 00:25:32.804 ' 00:25:32.804 19:34:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:32.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.804 --rc genhtml_branch_coverage=1 00:25:32.804 --rc genhtml_function_coverage=1 00:25:32.804 --rc genhtml_legend=1 00:25:32.804 --rc geninfo_all_blocks=1 00:25:32.804 --rc geninfo_unexecuted_blocks=1 00:25:32.804 00:25:32.804 ' 00:25:32.804 19:34:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:32.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.804 --rc genhtml_branch_coverage=1 00:25:32.804 --rc genhtml_function_coverage=1 00:25:32.804 --rc genhtml_legend=1 00:25:32.804 --rc geninfo_all_blocks=1 00:25:32.804 --rc geninfo_unexecuted_blocks=1 00:25:32.804 00:25:32.804 ' 00:25:32.804 19:34:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:32.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.804 --rc genhtml_branch_coverage=1 00:25:32.804 --rc genhtml_function_coverage=1 00:25:32.804 --rc genhtml_legend=1 00:25:32.804 --rc geninfo_all_blocks=1 00:25:32.804 --rc geninfo_unexecuted_blocks=1 00:25:32.804 00:25:32.804 ' 00:25:32.804 19:34:30 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.804 19:34:30 -- nvmf/common.sh@7 -- # uname -s 00:25:32.804 19:34:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.804 19:34:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.804 19:34:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.804 19:34:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.804 19:34:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.804 19:34:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.804 19:34:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.804 19:34:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.804 19:34:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.805 19:34:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.805 19:34:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.805 19:34:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.805 19:34:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.805 19:34:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.805 19:34:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.805 19:34:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.805 19:34:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.805 19:34:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.805 19:34:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.805 19:34:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.805 19:34:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.805 19:34:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.805 19:34:30 -- paths/export.sh@5 -- # export PATH 00:25:32.805 19:34:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.805 19:34:30 -- nvmf/common.sh@46 -- # : 0 00:25:32.805 19:34:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:32.805 19:34:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:32.805 19:34:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:32.805 19:34:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.805 19:34:30 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.805 19:34:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:32.805 19:34:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:32.805 19:34:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:32.805 19:34:30 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:32.805 19:34:30 -- host/dma.sh@13 -- # exit 0 00:25:32.805 00:25:32.805 real 0m0.143s 00:25:32.805 user 0m0.100s 00:25:32.805 sys 0m0.051s 00:25:32.805 19:34:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:32.805 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:25:32.805 ************************************ 00:25:32.805 END TEST dma 00:25:32.805 ************************************ 00:25:32.805 19:34:30 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:32.805 19:34:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:32.805 19:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.805 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:25:32.805 ************************************ 00:25:32.805 START TEST nvmf_identify 00:25:32.805 ************************************ 00:25:32.805 19:34:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:32.805 * Looking for test storage... 00:25:32.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.805 19:34:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:32.805 19:34:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:32.805 19:34:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:32.805 19:34:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:32.805 19:34:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:32.805 19:34:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:32.805 19:34:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:32.805 19:34:30 -- scripts/common.sh@335 -- # IFS=.-: 00:25:32.805 19:34:30 -- scripts/common.sh@335 -- # read -ra ver1 00:25:32.805 19:34:30 -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.805 19:34:30 -- scripts/common.sh@336 -- # read -ra ver2 00:25:32.805 19:34:30 -- scripts/common.sh@337 -- # local 'op=<' 00:25:32.805 19:34:30 -- scripts/common.sh@339 -- # ver1_l=2 00:25:32.805 19:34:30 -- scripts/common.sh@340 -- # ver2_l=1 00:25:32.805 19:34:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:32.805 19:34:30 -- scripts/common.sh@343 -- # case "$op" in 00:25:32.805 19:34:30 -- scripts/common.sh@344 -- # : 1 00:25:32.805 19:34:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:32.805 19:34:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.805 19:34:30 -- scripts/common.sh@364 -- # decimal 1 00:25:32.805 19:34:30 -- scripts/common.sh@352 -- # local d=1 00:25:32.805 19:34:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.805 19:34:30 -- scripts/common.sh@354 -- # echo 1 00:25:32.805 19:34:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:32.805 19:34:30 -- scripts/common.sh@365 -- # decimal 2 00:25:32.805 19:34:30 -- scripts/common.sh@352 -- # local d=2 00:25:32.805 19:34:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.805 19:34:30 -- scripts/common.sh@354 -- # echo 2 00:25:32.805 19:34:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:32.805 19:34:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:32.805 19:34:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:32.805 19:34:30 -- scripts/common.sh@367 -- # return 0 00:25:32.805 19:34:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.805 19:34:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.805 --rc genhtml_branch_coverage=1 00:25:32.805 --rc genhtml_function_coverage=1 00:25:32.805 --rc genhtml_legend=1 00:25:32.805 --rc geninfo_all_blocks=1 00:25:32.805 --rc geninfo_unexecuted_blocks=1 00:25:32.805 00:25:32.805 ' 00:25:32.805 19:34:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.805 --rc genhtml_branch_coverage=1 00:25:32.805 --rc genhtml_function_coverage=1 00:25:32.805 --rc genhtml_legend=1 00:25:32.805 --rc geninfo_all_blocks=1 00:25:32.805 --rc geninfo_unexecuted_blocks=1 00:25:32.805 00:25:32.805 ' 00:25:32.805 19:34:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.805 --rc genhtml_branch_coverage=1 00:25:32.805 --rc genhtml_function_coverage=1 00:25:32.805 --rc genhtml_legend=1 00:25:32.805 --rc geninfo_all_blocks=1 00:25:32.805 --rc geninfo_unexecuted_blocks=1 00:25:32.805 00:25:32.805 ' 00:25:32.805 19:34:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:32.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.805 --rc genhtml_branch_coverage=1 00:25:32.805 --rc genhtml_function_coverage=1 00:25:32.805 --rc genhtml_legend=1 00:25:32.805 --rc geninfo_all_blocks=1 00:25:32.805 --rc geninfo_unexecuted_blocks=1 00:25:32.805 00:25:32.805 ' 00:25:32.805 19:34:30 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.805 19:34:30 -- nvmf/common.sh@7 -- # uname -s 00:25:32.805 19:34:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.805 19:34:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.805 19:34:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.805 19:34:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.805 19:34:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.805 19:34:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.805 19:34:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.805 19:34:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.805 19:34:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.805 19:34:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.805 19:34:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.805 19:34:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.805 19:34:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.805 19:34:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.805 19:34:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.806 19:34:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.806 19:34:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.806 19:34:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.806 19:34:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.806 19:34:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.806 19:34:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.806 19:34:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.806 19:34:30 -- paths/export.sh@5 -- # export PATH 00:25:32.806 19:34:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.806 19:34:30 -- nvmf/common.sh@46 -- # : 0 00:25:32.806 19:34:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:32.806 19:34:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:32.806 19:34:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:32.806 19:34:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.806 19:34:30 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.806 19:34:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:32.806 19:34:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:32.806 19:34:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:32.806 19:34:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.806 19:34:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.806 19:34:30 -- host/identify.sh@14 -- # nvmftestinit 00:25:32.806 19:34:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:32.806 19:34:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.806 19:34:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:32.806 19:34:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:32.806 19:34:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:32.806 19:34:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.806 19:34:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.806 19:34:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.806 19:34:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:32.806 19:34:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:32.806 19:34:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:32.806 19:34:30 -- common/autotest_common.sh@10 -- # set +x 00:25:34.707 19:34:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:34.707 19:34:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:34.707 19:34:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:34.708 19:34:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:34.708 19:34:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:34.708 19:34:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:34.708 19:34:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:34.708 19:34:32 -- nvmf/common.sh@294 -- # net_devs=() 00:25:34.708 19:34:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:34.708 19:34:32 -- nvmf/common.sh@295 -- # e810=() 00:25:34.708 19:34:32 -- nvmf/common.sh@295 -- # local -ga e810 00:25:34.708 19:34:32 -- nvmf/common.sh@296 -- # x722=() 00:25:34.708 19:34:32 -- nvmf/common.sh@296 -- # local -ga x722 00:25:34.708 19:34:32 -- nvmf/common.sh@297 -- # mlx=() 00:25:34.708 19:34:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:34.708 19:34:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.708 19:34:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:34.708 19:34:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:34.708 19:34:32 
-- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:34.708 19:34:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:34.708 19:34:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:34.708 19:34:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:34.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:34.708 19:34:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:34.708 19:34:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:34.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:34.708 19:34:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:34.708 19:34:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:34.708 19:34:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.708 19:34:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:34.708 19:34:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.708 19:34:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:34.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:34.708 19:34:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.708 19:34:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:34.708 19:34:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.708 19:34:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:34.708 19:34:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.708 19:34:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:34.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:34.708 19:34:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.708 19:34:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:34.708 19:34:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:34.708 19:34:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:34.708 19:34:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:34.708 19:34:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.708 19:34:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.708 19:34:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.708 19:34:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:34.708 19:34:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.708 19:34:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.708 19:34:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:34.708 19:34:32 
-- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.708 19:34:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.708 19:34:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:34.708 19:34:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:34.708 19:34:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.708 19:34:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.708 19:34:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.708 19:34:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.708 19:34:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:34.708 19:34:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.967 19:34:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.967 19:34:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.967 19:34:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:34.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:25:34.967 00:25:34.967 --- 10.0.0.2 ping statistics --- 00:25:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.967 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:25:34.967 19:34:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:25:34.967 00:25:34.967 --- 10.0.0.1 ping statistics --- 00:25:34.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.967 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:25:34.967 19:34:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.967 19:34:32 -- nvmf/common.sh@410 -- # return 0 00:25:34.967 19:34:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:34.967 19:34:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.967 19:34:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:34.967 19:34:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:34.967 19:34:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.967 19:34:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:34.967 19:34:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:34.967 19:34:33 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:34.967 19:34:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.967 19:34:33 -- common/autotest_common.sh@10 -- # set +x 00:25:34.967 19:34:33 -- host/identify.sh@19 -- # nvmfpid=1285733 00:25:34.967 19:34:33 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.967 19:34:33 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.967 19:34:33 -- host/identify.sh@23 -- # waitforlisten 1285733 00:25:34.967 19:34:33 -- common/autotest_common.sh@829 -- # '[' -z 1285733 ']' 00:25:34.967 19:34:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.967 19:34:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.967 19:34:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.967 19:34:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.967 19:34:33 -- common/autotest_common.sh@10 -- # set +x 00:25:34.967 [2024-11-17 19:34:33.068070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:34.967 [2024-11-17 19:34:33.068154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.967 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.967 [2024-11-17 19:34:33.145426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:35.225 [2024-11-17 19:34:33.245627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:35.225 [2024-11-17 19:34:33.245791] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.225 [2024-11-17 19:34:33.245810] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.225 [2024-11-17 19:34:33.245841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.225 [2024-11-17 19:34:33.245905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.225 [2024-11-17 19:34:33.245934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.225 [2024-11-17 19:34:33.245964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.225 [2024-11-17 19:34:33.245966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.162 19:34:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.162 19:34:34 -- common/autotest_common.sh@862 -- # return 0 00:25:36.162 19:34:34 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 [2024-11-17 19:34:34.094378] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:36.162 19:34:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 19:34:34 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 Malloc0 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 
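The identify test builds its target the same way, this time over a RAM-backed bdev. In RPC terms (rpc_cmd again standing in for scripts/rpc.py, values as traced in this run, with the subsystem and discovery listeners registered just below):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -s: serial number
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_identify is then pointed at the discovery NQN on the same 10.0.0.2:4420 listener, as traced below.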
19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 [2024-11-17 19:34:34.165478] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:36.162 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.162 19:34:34 -- common/autotest_common.sh@10 -- # set +x 00:25:36.162 [2024-11-17 19:34:34.181263] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:36.162 [ 00:25:36.162 { 00:25:36.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:36.162 "subtype": "Discovery", 00:25:36.162 "listen_addresses": [ 00:25:36.162 { 00:25:36.162 "transport": "TCP", 00:25:36.162 "trtype": "TCP", 00:25:36.162 "adrfam": "IPv4", 00:25:36.162 "traddr": "10.0.0.2", 00:25:36.162 "trsvcid": "4420" 00:25:36.162 } 00:25:36.162 ], 00:25:36.162 "allow_any_host": true, 00:25:36.162 "hosts": [] 00:25:36.162 }, 00:25:36.162 { 00:25:36.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.162 "subtype": "NVMe", 00:25:36.162 "listen_addresses": [ 00:25:36.162 { 00:25:36.162 "transport": "TCP", 00:25:36.162 "trtype": "TCP", 00:25:36.162 "adrfam": "IPv4", 00:25:36.162 "traddr": "10.0.0.2", 00:25:36.162 "trsvcid": "4420" 00:25:36.162 } 00:25:36.162 ], 00:25:36.162 "allow_any_host": true, 00:25:36.162 "hosts": [], 00:25:36.162 "serial_number": "SPDK00000000000001", 00:25:36.162 "model_number": "SPDK bdev Controller", 00:25:36.162 "max_namespaces": 32, 00:25:36.162 "min_cntlid": 1, 00:25:36.162 "max_cntlid": 65519, 00:25:36.162 "namespaces": [ 00:25:36.162 { 00:25:36.162 "nsid": 1, 00:25:36.162 "bdev_name": "Malloc0", 00:25:36.162 "name": "Malloc0", 00:25:36.162 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:36.162 "eui64": "ABCDEF0123456789", 00:25:36.162 "uuid": "37e0ad51-57f5-4159-ba52-7ac97c0761f1" 00:25:36.162 } 00:25:36.162 ] 00:25:36.162 } 00:25:36.162 ] 00:25:36.162 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.162 19:34:34 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:36.162 [2024-11-17 19:34:34.202572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:36.162 [2024-11-17 19:34:34.202610] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285896 ] 00:25:36.162 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.162 [2024-11-17 19:34:34.232634] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:36.162 [2024-11-17 19:34:34.232713] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:36.162 [2024-11-17 19:34:34.232725] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:36.162 [2024-11-17 19:34:34.232740] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:36.162 [2024-11-17 19:34:34.232753] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:36.162 [2024-11-17 19:34:34.236723] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:36.162 [2024-11-17 19:34:34.236782] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f4d2f0 0 00:25:36.162 [2024-11-17 19:34:34.241694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:36.162 [2024-11-17 19:34:34.241713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:36.162 [2024-11-17 19:34:34.241722] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:36.162 [2024-11-17 19:34:34.241728] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:36.162 [2024-11-17 19:34:34.241789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.162 [2024-11-17 19:34:34.241802] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.162 [2024-11-17 19:34:34.241809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.162 [2024-11-17 19:34:34.241826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:36.162 [2024-11-17 19:34:34.241852] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.162 [2024-11-17 19:34:34.251691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.162 [2024-11-17 19:34:34.251709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.162 [2024-11-17 19:34:34.251717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.251725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.251746] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:36.163 [2024-11-17 19:34:34.251758] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:36.163 [2024-11-17 19:34:34.251767] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:36.163 [2024-11-17 19:34:34.251786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.251795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.251802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.251813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.251836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.251985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 19:34:34.252005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.252013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.252030] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:36.163 [2024-11-17 19:34:34.252042] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:36.163 [2024-11-17 19:34:34.252055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252069] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.252079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.252100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.252221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 19:34:34.252235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.252242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.252258] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:36.163 [2024-11-17 19:34:34.252273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:36.163 [2024-11-17 19:34:34.252285] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.252309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.252330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.252463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 
19:34:34.252474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.252481] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.252498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:36.163 [2024-11-17 19:34:34.252514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252522] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.252539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.252559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.252633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 19:34:34.252647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.252654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.252681] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:36.163 [2024-11-17 19:34:34.252692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:36.163 [2024-11-17 19:34:34.252706] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:36.163 [2024-11-17 19:34:34.252815] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:36.163 [2024-11-17 19:34:34.252824] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:36.163 [2024-11-17 19:34:34.252837] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.252851] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.252862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.252882] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.253002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 19:34:34.253014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.253021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.253037] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:36.163 [2024-11-17 19:34:34.253053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.253078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.253098] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.253186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 19:34:34.253199] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.253206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.253222] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:36.163 [2024-11-17 19:34:34.253230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:36.163 [2024-11-17 19:34:34.253242] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:36.163 [2024-11-17 19:34:34.253257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:36.163 [2024-11-17 19:34:34.253275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.163 [2024-11-17 19:34:34.253303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.163 [2024-11-17 19:34:34.253325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.163 [2024-11-17 19:34:34.253460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.163 [2024-11-17 19:34:34.253472] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.163 [2024-11-17 19:34:34.253479] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253485] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4d2f0): datao=0, datal=4096, cccid=0 00:25:36.163 [2024-11-17 19:34:34.253493] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa5ec0) on tqpair(0x1f4d2f0): 
expected_datao=0, payload_size=4096 00:25:36.163 [2024-11-17 19:34:34.253510] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.253520] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.293778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.163 [2024-11-17 19:34:34.293799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.163 [2024-11-17 19:34:34.293807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.163 [2024-11-17 19:34:34.293813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.163 [2024-11-17 19:34:34.293836] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:36.163 [2024-11-17 19:34:34.293845] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:36.163 [2024-11-17 19:34:34.293853] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:36.163 [2024-11-17 19:34:34.293861] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:36.163 [2024-11-17 19:34:34.293868] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:36.163 [2024-11-17 19:34:34.293876] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:36.163 [2024-11-17 19:34:34.293896] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:36.164 [2024-11-17 19:34:34.293910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.293918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.293925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.293936] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:36.164 [2024-11-17 19:34:34.293959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.164 [2024-11-17 19:34:34.294041] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.164 [2024-11-17 19:34:34.294053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.164 [2024-11-17 19:34:34.294060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294067] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa5ec0) on tqpair=0x1f4d2f0 00:25:36.164 [2024-11-17 19:34:34.294080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294093] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
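By this point the -L all debug stream has traced the whole admin-queue bring-up of the discovery controller: FABRIC CONNECT, the CC/CSTS property reads and writes, IDENTIFY (CNS 01h), and the asynchronous-event / keep-alive setup that continues below. When only those admin commands are of interest, the same invocation can be filtered on the print routine named in the records; a hedged sketch, with the binary path relative to an SPDK build (assumption) and the connection string copied from the log:

# re-run the same identify by hand and keep only the admin command prints
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all 2>&1 | grep nvme_admin_qpair_print_command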
00:25:36.164 [2024-11-17 19:34:34.294117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.164 [2024-11-17 19:34:34.294149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.164 [2024-11-17 19:34:34.294179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.164 [2024-11-17 19:34:34.294208] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:36.164 [2024-11-17 19:34:34.294227] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:36.164 [2024-11-17 19:34:34.294240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294247] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.164 [2024-11-17 19:34:34.294286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa5ec0, cid 0, qid 0 00:25:36.164 [2024-11-17 19:34:34.294297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6020, cid 1, qid 0 00:25:36.164 [2024-11-17 19:34:34.294304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6180, cid 2, qid 0 00:25:36.164 [2024-11-17 19:34:34.294312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa62e0, cid 3, qid 0 00:25:36.164 [2024-11-17 19:34:34.294319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6440, cid 4, qid 0 00:25:36.164 [2024-11-17 19:34:34.294460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.164 [2024-11-17 19:34:34.294472] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.164 [2024-11-17 19:34:34.294479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294485] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa6440) on tqpair=0x1f4d2f0 00:25:36.164 [2024-11-17 19:34:34.294495] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:36.164 [2024-11-17 19:34:34.294504] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:36.164 [2024-11-17 19:34:34.294520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.164 [2024-11-17 19:34:34.294571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6440, cid 4, qid 0 00:25:36.164 [2024-11-17 19:34:34.294660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.164 [2024-11-17 19:34:34.294672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.164 [2024-11-17 19:34:34.294689] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294696] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4d2f0): datao=0, datal=4096, cccid=4 00:25:36.164 [2024-11-17 19:34:34.294703] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa6440) on tqpair(0x1f4d2f0): expected_datao=0, payload_size=4096 00:25:36.164 [2024-11-17 19:34:34.294720] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294728] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294740] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.164 [2024-11-17 19:34:34.294750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.164 [2024-11-17 19:34:34.294756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294763] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa6440) on tqpair=0x1f4d2f0 00:25:36.164 [2024-11-17 19:34:34.294781] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:36.164 [2024-11-17 19:34:34.294816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294826] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.294843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.164 [2024-11-17 19:34:34.294853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.294866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 
19:34:34.294875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.164 [2024-11-17 19:34:34.294900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6440, cid 4, qid 0 00:25:36.164 [2024-11-17 19:34:34.294912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa65a0, cid 5, qid 0 00:25:36.164 [2024-11-17 19:34:34.295052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.164 [2024-11-17 19:34:34.295067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.164 [2024-11-17 19:34:34.295074] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.295080] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4d2f0): datao=0, datal=1024, cccid=4 00:25:36.164 [2024-11-17 19:34:34.295087] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa6440) on tqpair(0x1f4d2f0): expected_datao=0, payload_size=1024 00:25:36.164 [2024-11-17 19:34:34.295098] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.295105] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.295113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.164 [2024-11-17 19:34:34.295122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.164 [2024-11-17 19:34:34.295128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.295135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa65a0) on tqpair=0x1f4d2f0 00:25:36.164 [2024-11-17 19:34:34.339686] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.164 [2024-11-17 19:34:34.339705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.164 [2024-11-17 19:34:34.339722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.339730] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa6440) on tqpair=0x1f4d2f0 00:25:36.164 [2024-11-17 19:34:34.339749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.339758] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.339764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4d2f0) 00:25:36.164 [2024-11-17 19:34:34.339775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.164 [2024-11-17 19:34:34.339806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6440, cid 4, qid 0 00:25:36.164 [2024-11-17 19:34:34.339934] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.164 [2024-11-17 19:34:34.339947] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.164 [2024-11-17 19:34:34.339954] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.339960] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4d2f0): datao=0, datal=3072, cccid=4 00:25:36.164 [2024-11-17 19:34:34.339967] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa6440) on tqpair(0x1f4d2f0): expected_datao=0, payload_size=3072 
00:25:36.164 [2024-11-17 19:34:34.339987] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.339996] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.380778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.164 [2024-11-17 19:34:34.380797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.164 [2024-11-17 19:34:34.380804] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.380811] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa6440) on tqpair=0x1f4d2f0 00:25:36.164 [2024-11-17 19:34:34.380828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.164 [2024-11-17 19:34:34.380836] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.165 [2024-11-17 19:34:34.380843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4d2f0) 00:25:36.165 [2024-11-17 19:34:34.380853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.165 [2024-11-17 19:34:34.380882] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa6440, cid 4, qid 0 00:25:36.165 [2024-11-17 19:34:34.380983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.165 [2024-11-17 19:34:34.380995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.165 [2024-11-17 19:34:34.381002] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.165 [2024-11-17 19:34:34.381008] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4d2f0): datao=0, datal=8, cccid=4 00:25:36.165 [2024-11-17 19:34:34.381015] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa6440) on tqpair(0x1f4d2f0): expected_datao=0, payload_size=8 00:25:36.165 [2024-11-17 19:34:34.381026] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.165 [2024-11-17 19:34:34.381034] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.165 [2024-11-17 19:34:34.422778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.165 [2024-11-17 19:34:34.422797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.165 [2024-11-17 19:34:34.422805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.165 [2024-11-17 19:34:34.422812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa6440) on tqpair=0x1f4d2f0 00:25:36.165 ===================================================== 00:25:36.165 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:36.165 ===================================================== 00:25:36.165 Controller Capabilities/Features 00:25:36.165 ================================ 00:25:36.165 Vendor ID: 0000 00:25:36.165 Subsystem Vendor ID: 0000 00:25:36.165 Serial Number: .................... 00:25:36.165 Model Number: ........................................ 
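The controller report that starts above (and continues below) reflects what the target advertised over the admin queue; the target side of the same exchange can be inspected with the trace facility the nvmf_tgt start-up notices pointed at earlier in this log. A hedged sketch, assuming the spdk_trace binary from the same build and the shared-memory file name printed at start-up:

# snapshot the nvmf tracepoints of the running target (instance id 0)
./build/bin/spdk_trace -s nvmf -i 0
# or keep the raw trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0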
00:25:36.165 Firmware Version: 24.01.1 00:25:36.165 Recommended Arb Burst: 0 00:25:36.165 IEEE OUI Identifier: 00 00 00 00:25:36.165 Multi-path I/O 00:25:36.165 May have multiple subsystem ports: No 00:25:36.165 May have multiple controllers: No 00:25:36.165 Associated with SR-IOV VF: No 00:25:36.165 Max Data Transfer Size: 131072 00:25:36.165 Max Number of Namespaces: 0 00:25:36.165 Max Number of I/O Queues: 1024 00:25:36.165 NVMe Specification Version (VS): 1.3 00:25:36.165 NVMe Specification Version (Identify): 1.3 00:25:36.165 Maximum Queue Entries: 128 00:25:36.165 Contiguous Queues Required: Yes 00:25:36.165 Arbitration Mechanisms Supported 00:25:36.165 Weighted Round Robin: Not Supported 00:25:36.165 Vendor Specific: Not Supported 00:25:36.165 Reset Timeout: 15000 ms 00:25:36.165 Doorbell Stride: 4 bytes 00:25:36.165 NVM Subsystem Reset: Not Supported 00:25:36.165 Command Sets Supported 00:25:36.165 NVM Command Set: Supported 00:25:36.165 Boot Partition: Not Supported 00:25:36.165 Memory Page Size Minimum: 4096 bytes 00:25:36.165 Memory Page Size Maximum: 4096 bytes 00:25:36.165 Persistent Memory Region: Not Supported 00:25:36.165 Optional Asynchronous Events Supported 00:25:36.165 Namespace Attribute Notices: Not Supported 00:25:36.165 Firmware Activation Notices: Not Supported 00:25:36.165 ANA Change Notices: Not Supported 00:25:36.165 PLE Aggregate Log Change Notices: Not Supported 00:25:36.165 LBA Status Info Alert Notices: Not Supported 00:25:36.165 EGE Aggregate Log Change Notices: Not Supported 00:25:36.165 Normal NVM Subsystem Shutdown event: Not Supported 00:25:36.165 Zone Descriptor Change Notices: Not Supported 00:25:36.165 Discovery Log Change Notices: Supported 00:25:36.165 Controller Attributes 00:25:36.165 128-bit Host Identifier: Not Supported 00:25:36.165 Non-Operational Permissive Mode: Not Supported 00:25:36.165 NVM Sets: Not Supported 00:25:36.165 Read Recovery Levels: Not Supported 00:25:36.165 Endurance Groups: Not Supported 00:25:36.165 Predictable Latency Mode: Not Supported 00:25:36.165 Traffic Based Keep ALive: Not Supported 00:25:36.165 Namespace Granularity: Not Supported 00:25:36.165 SQ Associations: Not Supported 00:25:36.165 UUID List: Not Supported 00:25:36.165 Multi-Domain Subsystem: Not Supported 00:25:36.165 Fixed Capacity Management: Not Supported 00:25:36.165 Variable Capacity Management: Not Supported 00:25:36.165 Delete Endurance Group: Not Supported 00:25:36.165 Delete NVM Set: Not Supported 00:25:36.165 Extended LBA Formats Supported: Not Supported 00:25:36.165 Flexible Data Placement Supported: Not Supported 00:25:36.165 00:25:36.165 Controller Memory Buffer Support 00:25:36.165 ================================ 00:25:36.165 Supported: No 00:25:36.165 00:25:36.165 Persistent Memory Region Support 00:25:36.165 ================================ 00:25:36.165 Supported: No 00:25:36.165 00:25:36.165 Admin Command Set Attributes 00:25:36.165 ============================ 00:25:36.165 Security Send/Receive: Not Supported 00:25:36.165 Format NVM: Not Supported 00:25:36.165 Firmware Activate/Download: Not Supported 00:25:36.165 Namespace Management: Not Supported 00:25:36.165 Device Self-Test: Not Supported 00:25:36.165 Directives: Not Supported 00:25:36.165 NVMe-MI: Not Supported 00:25:36.165 Virtualization Management: Not Supported 00:25:36.165 Doorbell Buffer Config: Not Supported 00:25:36.165 Get LBA Status Capability: Not Supported 00:25:36.165 Command & Feature Lockdown Capability: Not Supported 00:25:36.165 Abort Command Limit: 1 00:25:36.165 
Async Event Request Limit: 4 00:25:36.165 Number of Firmware Slots: N/A 00:25:36.165 Firmware Slot 1 Read-Only: N/A 00:25:36.165 Firmware Activation Without Reset: N/A 00:25:36.165 Multiple Update Detection Support: N/A 00:25:36.165 Firmware Update Granularity: No Information Provided 00:25:36.165 Per-Namespace SMART Log: No 00:25:36.165 Asymmetric Namespace Access Log Page: Not Supported 00:25:36.165 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:36.165 Command Effects Log Page: Not Supported 00:25:36.165 Get Log Page Extended Data: Supported 00:25:36.165 Telemetry Log Pages: Not Supported 00:25:36.165 Persistent Event Log Pages: Not Supported 00:25:36.165 Supported Log Pages Log Page: May Support 00:25:36.165 Commands Supported & Effects Log Page: Not Supported 00:25:36.165 Feature Identifiers & Effects Log Page:May Support 00:25:36.165 NVMe-MI Commands & Effects Log Page: May Support 00:25:36.165 Data Area 4 for Telemetry Log: Not Supported 00:25:36.165 Error Log Page Entries Supported: 128 00:25:36.165 Keep Alive: Not Supported 00:25:36.165 00:25:36.165 NVM Command Set Attributes 00:25:36.165 ========================== 00:25:36.165 Submission Queue Entry Size 00:25:36.165 Max: 1 00:25:36.165 Min: 1 00:25:36.165 Completion Queue Entry Size 00:25:36.165 Max: 1 00:25:36.165 Min: 1 00:25:36.165 Number of Namespaces: 0 00:25:36.165 Compare Command: Not Supported 00:25:36.165 Write Uncorrectable Command: Not Supported 00:25:36.165 Dataset Management Command: Not Supported 00:25:36.165 Write Zeroes Command: Not Supported 00:25:36.165 Set Features Save Field: Not Supported 00:25:36.165 Reservations: Not Supported 00:25:36.165 Timestamp: Not Supported 00:25:36.165 Copy: Not Supported 00:25:36.165 Volatile Write Cache: Not Present 00:25:36.165 Atomic Write Unit (Normal): 1 00:25:36.165 Atomic Write Unit (PFail): 1 00:25:36.165 Atomic Compare & Write Unit: 1 00:25:36.165 Fused Compare & Write: Supported 00:25:36.165 Scatter-Gather List 00:25:36.165 SGL Command Set: Supported 00:25:36.165 SGL Keyed: Supported 00:25:36.165 SGL Bit Bucket Descriptor: Not Supported 00:25:36.165 SGL Metadata Pointer: Not Supported 00:25:36.165 Oversized SGL: Not Supported 00:25:36.165 SGL Metadata Address: Not Supported 00:25:36.165 SGL Offset: Supported 00:25:36.165 Transport SGL Data Block: Not Supported 00:25:36.165 Replay Protected Memory Block: Not Supported 00:25:36.165 00:25:36.165 Firmware Slot Information 00:25:36.165 ========================= 00:25:36.165 Active slot: 0 00:25:36.165 00:25:36.165 00:25:36.165 Error Log 00:25:36.165 ========= 00:25:36.165 00:25:36.165 Active Namespaces 00:25:36.165 ================= 00:25:36.165 Discovery Log Page 00:25:36.165 ================== 00:25:36.165 Generation Counter: 2 00:25:36.165 Number of Records: 2 00:25:36.165 Record Format: 0 00:25:36.165 00:25:36.165 Discovery Log Entry 0 00:25:36.165 ---------------------- 00:25:36.165 Transport Type: 3 (TCP) 00:25:36.165 Address Family: 1 (IPv4) 00:25:36.165 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:36.165 Entry Flags: 00:25:36.165 Duplicate Returned Information: 1 00:25:36.165 Explicit Persistent Connection Support for Discovery: 1 00:25:36.165 Transport Requirements: 00:25:36.165 Secure Channel: Not Required 00:25:36.165 Port ID: 0 (0x0000) 00:25:36.165 Controller ID: 65535 (0xffff) 00:25:36.165 Admin Max SQ Size: 128 00:25:36.165 Transport Service Identifier: 4420 00:25:36.165 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:36.166 Transport Address: 10.0.0.2 00:25:36.166 
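Discovery Log Entry 0 above describes the discovery subsystem itself, and Entry 1, just below, the NVM subsystem nqn.2016-06.io.spdk:cnode1. These are the same two records a Linux initiator would retrieve with nvme-cli; a hedged equivalent, assuming nvme-cli is installed on a host that can reach the listener (not part of this test):

# kernel-initiator view of the same discovery log page
nvme discover -t tcp -a 10.0.0.2 -s 4420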
Discovery Log Entry 1 00:25:36.166 ---------------------- 00:25:36.166 Transport Type: 3 (TCP) 00:25:36.166 Address Family: 1 (IPv4) 00:25:36.166 Subsystem Type: 2 (NVM Subsystem) 00:25:36.166 Entry Flags: 00:25:36.166 Duplicate Returned Information: 0 00:25:36.166 Explicit Persistent Connection Support for Discovery: 0 00:25:36.166 Transport Requirements: 00:25:36.166 Secure Channel: Not Required 00:25:36.166 Port ID: 0 (0x0000) 00:25:36.166 Controller ID: 65535 (0xffff) 00:25:36.166 Admin Max SQ Size: 128 00:25:36.166 Transport Service Identifier: 4420 00:25:36.166 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:36.166 Transport Address: 10.0.0.2 [2024-11-17 19:34:34.422927] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:36.166 [2024-11-17 19:34:34.422952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.166 [2024-11-17 19:34:34.422969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.166 [2024-11-17 19:34:34.422979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.166 [2024-11-17 19:34:34.422988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.166 [2024-11-17 19:34:34.423001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4d2f0) 00:25:36.166 [2024-11-17 19:34:34.423027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.166 [2024-11-17 19:34:34.423052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa62e0, cid 3, qid 0 00:25:36.166 [2024-11-17 19:34:34.423159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.166 [2024-11-17 19:34:34.423171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.166 [2024-11-17 19:34:34.423179] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423185] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa62e0) on tqpair=0x1f4d2f0 00:25:36.166 [2024-11-17 19:34:34.423198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4d2f0) 00:25:36.166 [2024-11-17 19:34:34.423222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.166 [2024-11-17 19:34:34.423247] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa62e0, cid 3, qid 0 00:25:36.166 [2024-11-17 19:34:34.423372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.166 [2024-11-17 19:34:34.423384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.166 [2024-11-17 19:34:34.423391] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa62e0) on tqpair=0x1f4d2f0 00:25:36.166 [2024-11-17 19:34:34.423406] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:36.166 [2024-11-17 19:34:34.423414] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:36.166 [2024-11-17 19:34:34.423429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423437] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4d2f0) 00:25:36.166 [2024-11-17 19:34:34.423454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.166 [2024-11-17 19:34:34.423473] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa62e0, cid 3, qid 0 00:25:36.166 [2024-11-17 19:34:34.423569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.166 [2024-11-17 19:34:34.423583] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.166 [2024-11-17 19:34:34.423590] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423596] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa62e0) on tqpair=0x1f4d2f0 00:25:36.166 [2024-11-17 19:34:34.423613] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.166 [2024-11-17 19:34:34.423628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4d2f0) 00:25:36.166 [2024-11-17 19:34:34.423642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.166 [2024-11-17 19:34:34.423664] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa62e0, cid 3, qid 0 00:25:36.427 [2024-11-17 19:34:34.427690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.427 [2024-11-17 19:34:34.427706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.427 [2024-11-17 19:34:34.427713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.427720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa62e0) on tqpair=0x1f4d2f0 00:25:36.427 [2024-11-17 19:34:34.427739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.427764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.427771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4d2f0) 00:25:36.427 [2024-11-17 19:34:34.427781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.427 [2024-11-17 19:34:34.427804] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa62e0, cid 3, qid 0 00:25:36.427 [2024-11-17 19:34:34.427920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.427 [2024-11-17 
19:34:34.427933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.427 [2024-11-17 19:34:34.427940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.427948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa62e0) on tqpair=0x1f4d2f0 00:25:36.427 [2024-11-17 19:34:34.427962] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:25:36.427 00:25:36.427 19:34:34 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:36.427 [2024-11-17 19:34:34.458454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:36.427 [2024-11-17 19:34:34.458488] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285906 ] 00:25:36.427 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.427 [2024-11-17 19:34:34.489527] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:36.427 [2024-11-17 19:34:34.489571] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:36.427 [2024-11-17 19:34:34.489581] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:36.427 [2024-11-17 19:34:34.489596] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:36.427 [2024-11-17 19:34:34.489608] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:36.427 [2024-11-17 19:34:34.492719] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:36.427 [2024-11-17 19:34:34.492771] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f642f0 0 00:25:36.427 [2024-11-17 19:34:34.499685] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:36.427 [2024-11-17 19:34:34.499705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:36.427 [2024-11-17 19:34:34.499713] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:36.427 [2024-11-17 19:34:34.499719] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:36.427 [2024-11-17 19:34:34.499756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.499772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.499780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.427 [2024-11-17 19:34:34.499793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:36.427 [2024-11-17 19:34:34.499818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.427 [2024-11-17 19:34:34.506687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.427 [2024-11-17 19:34:34.506704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.427 [2024-11-17 19:34:34.506712] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.506719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.427 [2024-11-17 19:34:34.506734] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:36.427 [2024-11-17 19:34:34.506744] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:36.427 [2024-11-17 19:34:34.506753] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:36.427 [2024-11-17 19:34:34.506769] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.506778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.506784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.427 [2024-11-17 19:34:34.506795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.427 [2024-11-17 19:34:34.506818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.427 [2024-11-17 19:34:34.506951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.427 [2024-11-17 19:34:34.506966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.427 [2024-11-17 19:34:34.506973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.506979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.427 [2024-11-17 19:34:34.506988] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:36.427 [2024-11-17 19:34:34.507001] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:36.427 [2024-11-17 19:34:34.507014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.507022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.427 [2024-11-17 19:34:34.507029] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.507039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.428 [2024-11-17 19:34:34.507060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.507149] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.507162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.507169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.507185] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:36.428 [2024-11-17 19:34:34.507198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 
15000 ms) 00:25:36.428 [2024-11-17 19:34:34.507211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507230] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.507240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.428 [2024-11-17 19:34:34.507261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.507351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.507364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.507371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.507386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:36.428 [2024-11-17 19:34:34.507403] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507418] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.507429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.428 [2024-11-17 19:34:34.507449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.507540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.507552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.507559] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.507573] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:36.428 [2024-11-17 19:34:34.507582] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:36.428 [2024-11-17 19:34:34.507594] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:36.428 [2024-11-17 19:34:34.507705] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:36.428 [2024-11-17 19:34:34.507714] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:36.428 [2024-11-17 19:34:34.507726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507734] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 
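The debug records here belong to the second identify invocation (host/identify.sh@45 above), which repeats the same connect/enable sequence against nqn.2016-06.io.spdk:cnode1 instead of the discovery NQN. Taken together, the two test invocations amount to the loop below; a hedged sketch with the binary path relative to an SPDK build (assumption) and the connection parameters copied from the log:

# identify the discovery controller, then the NVM subsystem controller
for subnqn in nqn.2014-08.org.nvmexpress.discovery nqn.2016-06.io.spdk:cnode1; do
    ./build/bin/spdk_nvme_identify \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:${subnqn}" -L all
done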
[2024-11-17 19:34:34.507740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.507750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.428 [2024-11-17 19:34:34.507771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.507883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.507897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.507904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.507919] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:36.428 [2024-11-17 19:34:34.507935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.507956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.507966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.428 [2024-11-17 19:34:34.507987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.508069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.508082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.508089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.508096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.508104] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:36.428 [2024-11-17 19:34:34.508112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:36.428 [2024-11-17 19:34:34.508125] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:36.428 [2024-11-17 19:34:34.508140] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:36.428 [2024-11-17 19:34:34.508154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.508162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.508168] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.508179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.428 [2024-11-17 19:34:34.508199] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.508336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.428 [2024-11-17 19:34:34.508349] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.428 [2024-11-17 19:34:34.508355] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.508362] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=4096, cccid=0 00:25:36.428 [2024-11-17 19:34:34.508369] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbcec0) on tqpair(0x1f642f0): expected_datao=0, payload_size=4096 00:25:36.428 [2024-11-17 19:34:34.508386] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.508395] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.550684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.550702] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.550710] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.550716] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.550728] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:36.428 [2024-11-17 19:34:34.550736] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:36.428 [2024-11-17 19:34:34.550744] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:36.428 [2024-11-17 19:34:34.550750] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:36.428 [2024-11-17 19:34:34.550758] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:36.428 [2024-11-17 19:34:34.550770] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:36.428 [2024-11-17 19:34:34.550790] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:36.428 [2024-11-17 19:34:34.550804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.550811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.550818] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.550829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:36.428 [2024-11-17 19:34:34.550851] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.428 [2024-11-17 19:34:34.550991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.428 [2024-11-17 19:34:34.551005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.428 [2024-11-17 19:34:34.551011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551018] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbcec0) on tqpair=0x1f642f0 00:25:36.428 [2024-11-17 19:34:34.551044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.551068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.428 [2024-11-17 19:34:34.551078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f642f0) 00:25:36.428 [2024-11-17 19:34:34.551100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.428 [2024-11-17 19:34:34.551109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.428 [2024-11-17 19:34:34.551122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.551130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.429 [2024-11-17 19:34:34.551140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.551161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.429 [2024-11-17 19:34:34.551185] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551204] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.551239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.429 [2024-11-17 19:34:34.551265] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbcec0, cid 0, qid 0 00:25:36.429 [2024-11-17 19:34:34.551292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd020, cid 1, qid 0 00:25:36.429 [2024-11-17 19:34:34.551300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd180, cid 2, qid 0 00:25:36.429 [2024-11-17 19:34:34.551308] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.429 [2024-11-17 19:34:34.551315] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.429 [2024-11-17 19:34:34.551488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.429 [2024-11-17 19:34:34.551501] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.429 [2024-11-17 19:34:34.551508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.429 [2024-11-17 19:34:34.551523] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:36.429 [2024-11-17 19:34:34.551532] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551545] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551561] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.551613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:36.429 [2024-11-17 19:34:34.551634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.429 [2024-11-17 19:34:34.551780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.429 [2024-11-17 19:34:34.551795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.429 [2024-11-17 19:34:34.551802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.429 [2024-11-17 19:34:34.551874] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551891] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.551905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.551920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.551930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.429 [2024-11-17 19:34:34.551951] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.429 [2024-11-17 19:34:34.552111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.429 [2024-11-17 19:34:34.552126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.429 [2024-11-17 19:34:34.552133] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.552139] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=4096, cccid=4 00:25:36.429 [2024-11-17 19:34:34.552151] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd440) on tqpair(0x1f642f0): expected_datao=0, payload_size=4096 00:25:36.429 [2024-11-17 19:34:34.552170] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.552179] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.592809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.429 [2024-11-17 19:34:34.592827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.429 [2024-11-17 19:34:34.592835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.592842] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.429 [2024-11-17 19:34:34.592863] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:36.429 [2024-11-17 19:34:34.592884] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.592901] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.592915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.592924] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.592930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.592941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.429 [2024-11-17 19:34:34.592963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.429 [2024-11-17 19:34:34.593076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.429 [2024-11-17 19:34:34.593090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.429 [2024-11-17 19:34:34.593097] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.593104] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=4096, cccid=4 00:25:36.429 [2024-11-17 19:34:34.593111] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd440) on tqpair(0x1f642f0): expected_datao=0, payload_size=4096 00:25:36.429 [2024-11-17 19:34:34.593129] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.593138] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.633846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
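
The entries above trace SPDK's host-side controller initialization for nqn.2016-06.io.spdk:cnode1: enable via CC.EN, wait for CSTS.RDY, IDENTIFY CONTROLLER, AER configuration, keep-alive timeout, queue count, and the active-namespace scan that just reported "Namespace 1 was added". As a rough sketch only, using SPDK's public host API rather than the test scripts driving this run, an initiator that triggers the same sequence over TCP looks approximately like the following; the address, port and NQN are taken from the log, while the program name and the minimal error handling are assumptions.

    /*
     * Sketch only: connect to the NVMe-oF/TCP subsystem exercised in this run and
     * let spdk_nvme_connect() walk the initialization states logged above
     * (CC.EN, CSTS.RDY, IDENTIFY, AER, keep-alive, queue count, namespace scan).
     */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        struct spdk_nvme_ns *ns;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";          /* assumed app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        /* Drives the admin-queue state machine recorded in the debug output. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to %s failed\n", trid.traddr);
            return 1;
        }

        /* Namespace 1 is the one reported active above. */
        ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
        if (ns != NULL && spdk_nvme_ns_is_active(ns)) {
            printf("ns 1: %" PRIu64 " LBAs of %u bytes\n",
                   spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Built against the SPDK tree fetched earlier in this job, that is roughly the flow the identify host test drives before printing the controller report further below.
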
00:25:36.429 [2024-11-17 19:34:34.633865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.429 [2024-11-17 19:34:34.633872] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.633879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.429 [2024-11-17 19:34:34.633904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.633924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.633939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.633947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.633953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.429 [2024-11-17 19:34:34.633964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.429 [2024-11-17 19:34:34.633987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.429 [2024-11-17 19:34:34.634087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.429 [2024-11-17 19:34:34.634106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.429 [2024-11-17 19:34:34.634114] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.634120] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=4096, cccid=4 00:25:36.429 [2024-11-17 19:34:34.634128] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd440) on tqpair(0x1f642f0): expected_datao=0, payload_size=4096 00:25:36.429 [2024-11-17 19:34:34.634139] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.634147] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.634159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.429 [2024-11-17 19:34:34.634168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.429 [2024-11-17 19:34:34.634175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.429 [2024-11-17 19:34:34.634182] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.429 [2024-11-17 19:34:34.634195] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.634210] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.634226] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.634237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.634246] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.634254] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:36.429 [2024-11-17 19:34:34.634262] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:36.429 [2024-11-17 19:34:34.634270] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:36.429 [2024-11-17 19:34:34.634289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634298] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634304] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.634314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.634325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634333] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.634348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.430 [2024-11-17 19:34:34.634388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.430 [2024-11-17 19:34:34.634400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd5a0, cid 5, qid 0 00:25:36.430 [2024-11-17 19:34:34.634537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.634551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.634558] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.634575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.634588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.634596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd5a0) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.634619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.634635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.634645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.634665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd5a0, cid 5, qid 
0 00:25:36.430 [2024-11-17 19:34:34.638692] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.638706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.638713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.638720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd5a0) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.638738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.638747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.638753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.638764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.638785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd5a0, cid 5, qid 0 00:25:36.430 [2024-11-17 19:34:34.638917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.638931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.638938] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.638944] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd5a0) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.638960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.638970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.638976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.638986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.639007] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd5a0, cid 5, qid 0 00:25:36.430 [2024-11-17 19:34:34.639091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.639105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.639112] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639118] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd5a0) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.639138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.639164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.639176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639184] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 
19:34:34.639194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.639204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.639215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.639238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.639249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f642f0) 00:25:36.430 [2024-11-17 19:34:34.639272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.430 [2024-11-17 19:34:34.639309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd5a0, cid 5, qid 0 00:25:36.430 [2024-11-17 19:34:34.639321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd440, cid 4, qid 0 00:25:36.430 [2024-11-17 19:34:34.639328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd700, cid 6, qid 0 00:25:36.430 [2024-11-17 19:34:34.639335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd860, cid 7, qid 0 00:25:36.430 [2024-11-17 19:34:34.639562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.430 [2024-11-17 19:34:34.639577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.430 [2024-11-17 19:34:34.639584] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639590] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=8192, cccid=5 00:25:36.430 [2024-11-17 19:34:34.639598] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd5a0) on tqpair(0x1f642f0): expected_datao=0, payload_size=8192 00:25:36.430 [2024-11-17 19:34:34.639617] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639627] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.430 [2024-11-17 19:34:34.639649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.430 [2024-11-17 19:34:34.639656] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639662] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=512, cccid=4 00:25:36.430 [2024-11-17 19:34:34.639669] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd440) on tqpair(0x1f642f0): expected_datao=0, payload_size=512 00:25:36.430 [2024-11-17 19:34:34.639687] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639695] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639703] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.430 [2024-11-17 19:34:34.639712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.430 [2024-11-17 19:34:34.639718] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639724] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=512, cccid=6 00:25:36.430 [2024-11-17 19:34:34.639731] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd700) on tqpair(0x1f642f0): expected_datao=0, payload_size=512 00:25:36.430 [2024-11-17 19:34:34.639741] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639748] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:36.430 [2024-11-17 19:34:34.639770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:36.430 [2024-11-17 19:34:34.639777] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639783] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f642f0): datao=0, datal=4096, cccid=7 00:25:36.430 [2024-11-17 19:34:34.639790] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fbd860) on tqpair(0x1f642f0): expected_datao=0, payload_size=4096 00:25:36.430 [2024-11-17 19:34:34.639800] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639807] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639819] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.639829] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.639835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd5a0) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.639862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.639874] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.639880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639886] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd440) on tqpair=0x1f642f0 00:25:36.430 [2024-11-17 19:34:34.639901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.430 [2024-11-17 19:34:34.639912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.430 [2024-11-17 19:34:34.639919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.430 [2024-11-17 19:34:34.639925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd700) on tqpair=0x1f642f0 00:25:36.431 [2024-11-17 19:34:34.639936] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.431 [2024-11-17 19:34:34.639946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.431 [2024-11-17 19:34:34.639968] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.431 [2024-11-17 19:34:34.639974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd860) on tqpair=0x1f642f0 00:25:36.431 ===================================================== 00:25:36.431 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.431 ===================================================== 00:25:36.431 Controller Capabilities/Features 00:25:36.431 ================================ 00:25:36.431 Vendor ID: 8086 00:25:36.431 Subsystem Vendor ID: 8086 00:25:36.431 Serial Number: SPDK00000000000001 00:25:36.431 Model Number: SPDK bdev Controller 00:25:36.431 Firmware Version: 24.01.1 00:25:36.431 Recommended Arb Burst: 6 00:25:36.431 IEEE OUI Identifier: e4 d2 5c 00:25:36.431 Multi-path I/O 00:25:36.431 May have multiple subsystem ports: Yes 00:25:36.431 May have multiple controllers: Yes 00:25:36.431 Associated with SR-IOV VF: No 00:25:36.431 Max Data Transfer Size: 131072 00:25:36.431 Max Number of Namespaces: 32 00:25:36.431 Max Number of I/O Queues: 127 00:25:36.431 NVMe Specification Version (VS): 1.3 00:25:36.431 NVMe Specification Version (Identify): 1.3 00:25:36.431 Maximum Queue Entries: 128 00:25:36.431 Contiguous Queues Required: Yes 00:25:36.431 Arbitration Mechanisms Supported 00:25:36.431 Weighted Round Robin: Not Supported 00:25:36.431 Vendor Specific: Not Supported 00:25:36.431 Reset Timeout: 15000 ms 00:25:36.431 Doorbell Stride: 4 bytes 00:25:36.431 NVM Subsystem Reset: Not Supported 00:25:36.431 Command Sets Supported 00:25:36.431 NVM Command Set: Supported 00:25:36.431 Boot Partition: Not Supported 00:25:36.431 Memory Page Size Minimum: 4096 bytes 00:25:36.431 Memory Page Size Maximum: 4096 bytes 00:25:36.431 Persistent Memory Region: Not Supported 00:25:36.431 Optional Asynchronous Events Supported 00:25:36.431 Namespace Attribute Notices: Supported 00:25:36.431 Firmware Activation Notices: Not Supported 00:25:36.431 ANA Change Notices: Not Supported 00:25:36.431 PLE Aggregate Log Change Notices: Not Supported 00:25:36.431 LBA Status Info Alert Notices: Not Supported 00:25:36.431 EGE Aggregate Log Change Notices: Not Supported 00:25:36.431 Normal NVM Subsystem Shutdown event: Not Supported 00:25:36.431 Zone Descriptor Change Notices: Not Supported 00:25:36.431 Discovery Log Change Notices: Not Supported 00:25:36.431 Controller Attributes 00:25:36.431 128-bit Host Identifier: Supported 00:25:36.431 Non-Operational Permissive Mode: Not Supported 00:25:36.431 NVM Sets: Not Supported 00:25:36.431 Read Recovery Levels: Not Supported 00:25:36.431 Endurance Groups: Not Supported 00:25:36.431 Predictable Latency Mode: Not Supported 00:25:36.431 Traffic Based Keep ALive: Not Supported 00:25:36.431 Namespace Granularity: Not Supported 00:25:36.431 SQ Associations: Not Supported 00:25:36.431 UUID List: Not Supported 00:25:36.431 Multi-Domain Subsystem: Not Supported 00:25:36.431 Fixed Capacity Management: Not Supported 00:25:36.431 Variable Capacity Management: Not Supported 00:25:36.431 Delete Endurance Group: Not Supported 00:25:36.431 Delete NVM Set: Not Supported 00:25:36.431 Extended LBA Formats Supported: Not Supported 00:25:36.431 Flexible Data Placement Supported: Not Supported 00:25:36.431 00:25:36.431 Controller Memory Buffer Support 00:25:36.431 ================================ 00:25:36.431 Supported: No 00:25:36.431 00:25:36.431 Persistent Memory Region Support 00:25:36.431 ================================ 00:25:36.431 Supported: No 
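
The block that begins above ("NVMe over Fabrics controller at 10.0.0.2:4420") is the human-readable identify-controller report assembled from the IDENTIFY data fetched earlier in this trace. As an illustrative sketch only, reusing the assumed ctrlr handle from the previous example and not code from this test, the report's header fields map onto SPDK's cached controller data like this:

    /*
     * Sketch only: the header fields of the report above come from the cached
     * IDENTIFY CONTROLLER data exposed by spdk_nvme_ctrlr_get_data().
     */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        /* sn/mn/fr are fixed-width, space-padded byte fields, hence the precision. */
        printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
        printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
        printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);
        printf("Max Data Transfer Size: %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
        printf("Max Number of Namespaces: %u\n", cdata->nn);
    }
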
00:25:36.431 00:25:36.431 Admin Command Set Attributes 00:25:36.431 ============================ 00:25:36.431 Security Send/Receive: Not Supported 00:25:36.431 Format NVM: Not Supported 00:25:36.431 Firmware Activate/Download: Not Supported 00:25:36.431 Namespace Management: Not Supported 00:25:36.431 Device Self-Test: Not Supported 00:25:36.431 Directives: Not Supported 00:25:36.431 NVMe-MI: Not Supported 00:25:36.431 Virtualization Management: Not Supported 00:25:36.431 Doorbell Buffer Config: Not Supported 00:25:36.431 Get LBA Status Capability: Not Supported 00:25:36.431 Command & Feature Lockdown Capability: Not Supported 00:25:36.431 Abort Command Limit: 4 00:25:36.431 Async Event Request Limit: 4 00:25:36.431 Number of Firmware Slots: N/A 00:25:36.431 Firmware Slot 1 Read-Only: N/A 00:25:36.431 Firmware Activation Without Reset: N/A 00:25:36.431 Multiple Update Detection Support: N/A 00:25:36.431 Firmware Update Granularity: No Information Provided 00:25:36.431 Per-Namespace SMART Log: No 00:25:36.431 Asymmetric Namespace Access Log Page: Not Supported 00:25:36.431 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:36.431 Command Effects Log Page: Supported 00:25:36.431 Get Log Page Extended Data: Supported 00:25:36.431 Telemetry Log Pages: Not Supported 00:25:36.431 Persistent Event Log Pages: Not Supported 00:25:36.431 Supported Log Pages Log Page: May Support 00:25:36.431 Commands Supported & Effects Log Page: Not Supported 00:25:36.431 Feature Identifiers & Effects Log Page:May Support 00:25:36.431 NVMe-MI Commands & Effects Log Page: May Support 00:25:36.431 Data Area 4 for Telemetry Log: Not Supported 00:25:36.431 Error Log Page Entries Supported: 128 00:25:36.431 Keep Alive: Supported 00:25:36.431 Keep Alive Granularity: 10000 ms 00:25:36.431 00:25:36.431 NVM Command Set Attributes 00:25:36.431 ========================== 00:25:36.431 Submission Queue Entry Size 00:25:36.431 Max: 64 00:25:36.431 Min: 64 00:25:36.431 Completion Queue Entry Size 00:25:36.431 Max: 16 00:25:36.431 Min: 16 00:25:36.431 Number of Namespaces: 32 00:25:36.431 Compare Command: Supported 00:25:36.431 Write Uncorrectable Command: Not Supported 00:25:36.431 Dataset Management Command: Supported 00:25:36.431 Write Zeroes Command: Supported 00:25:36.431 Set Features Save Field: Not Supported 00:25:36.431 Reservations: Supported 00:25:36.431 Timestamp: Not Supported 00:25:36.431 Copy: Supported 00:25:36.431 Volatile Write Cache: Present 00:25:36.431 Atomic Write Unit (Normal): 1 00:25:36.431 Atomic Write Unit (PFail): 1 00:25:36.431 Atomic Compare & Write Unit: 1 00:25:36.431 Fused Compare & Write: Supported 00:25:36.431 Scatter-Gather List 00:25:36.431 SGL Command Set: Supported 00:25:36.431 SGL Keyed: Supported 00:25:36.431 SGL Bit Bucket Descriptor: Not Supported 00:25:36.431 SGL Metadata Pointer: Not Supported 00:25:36.431 Oversized SGL: Not Supported 00:25:36.431 SGL Metadata Address: Not Supported 00:25:36.431 SGL Offset: Supported 00:25:36.431 Transport SGL Data Block: Not Supported 00:25:36.431 Replay Protected Memory Block: Not Supported 00:25:36.431 00:25:36.431 Firmware Slot Information 00:25:36.431 ========================= 00:25:36.431 Active slot: 1 00:25:36.431 Slot 1 Firmware Revision: 24.01.1 00:25:36.431 00:25:36.431 00:25:36.431 Commands Supported and Effects 00:25:36.431 ============================== 00:25:36.431 Admin Commands 00:25:36.431 -------------- 00:25:36.431 Get Log Page (02h): Supported 00:25:36.431 Identify (06h): Supported 00:25:36.431 Abort (08h): Supported 00:25:36.431 Set 
Features (09h): Supported 00:25:36.431 Get Features (0Ah): Supported 00:25:36.431 Asynchronous Event Request (0Ch): Supported 00:25:36.431 Keep Alive (18h): Supported 00:25:36.431 I/O Commands 00:25:36.431 ------------ 00:25:36.431 Flush (00h): Supported LBA-Change 00:25:36.431 Write (01h): Supported LBA-Change 00:25:36.431 Read (02h): Supported 00:25:36.431 Compare (05h): Supported 00:25:36.431 Write Zeroes (08h): Supported LBA-Change 00:25:36.431 Dataset Management (09h): Supported LBA-Change 00:25:36.431 Copy (19h): Supported LBA-Change 00:25:36.431 Unknown (79h): Supported LBA-Change 00:25:36.431 Unknown (7Ah): Supported 00:25:36.431 00:25:36.431 Error Log 00:25:36.431 ========= 00:25:36.431 00:25:36.431 Arbitration 00:25:36.431 =========== 00:25:36.431 Arbitration Burst: 1 00:25:36.431 00:25:36.431 Power Management 00:25:36.431 ================ 00:25:36.431 Number of Power States: 1 00:25:36.431 Current Power State: Power State #0 00:25:36.431 Power State #0: 00:25:36.431 Max Power: 0.00 W 00:25:36.431 Non-Operational State: Operational 00:25:36.431 Entry Latency: Not Reported 00:25:36.431 Exit Latency: Not Reported 00:25:36.431 Relative Read Throughput: 0 00:25:36.431 Relative Read Latency: 0 00:25:36.432 Relative Write Throughput: 0 00:25:36.432 Relative Write Latency: 0 00:25:36.432 Idle Power: Not Reported 00:25:36.432 Active Power: Not Reported 00:25:36.432 Non-Operational Permissive Mode: Not Supported 00:25:36.432 00:25:36.432 Health Information 00:25:36.432 ================== 00:25:36.432 Critical Warnings: 00:25:36.432 Available Spare Space: OK 00:25:36.432 Temperature: OK 00:25:36.432 Device Reliability: OK 00:25:36.432 Read Only: No 00:25:36.432 Volatile Memory Backup: OK 00:25:36.432 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:36.432 Temperature Threshold: [2024-11-17 19:34:34.640105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.640133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.640154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd860, cid 7, qid 0 00:25:36.432 [2024-11-17 19:34:34.640309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.640324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.640331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd860) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.640378] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:36.432 [2024-11-17 19:34:34.640399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.432 [2024-11-17 19:34:34.640412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.432 [2024-11-17 19:34:34.640422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.432 [2024-11-17 19:34:34.640435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.432 [2024-11-17 19:34:34.640449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640463] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.640489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.640511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.640691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.640707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.640714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.640733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640747] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.640757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.640783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.640885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.640899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.640906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.640921] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:36.432 [2024-11-17 19:34:34.640928] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:36.432 [2024-11-17 19:34:34.640944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.640959] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.640969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.640989] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.641073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 
19:34:34.641086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.641093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641099] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.641116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.641142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.641162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.641246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.641263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.641271] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.641294] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.641320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.641340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.641421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.641434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.641441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.641464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.641490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.641509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.641593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.641606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.641613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
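
The repeated FABRIC PROPERTY GET qid:0 cid:3 entries around this point are the host polling CSTS while the controller shuts down ("RTD3E = 0 us" and "shutdown timeout = 10000 ms" above; "shutdown complete in 5 milliseconds" a little further on). A minimal sketch of the application-level calls that produce this teardown, under the same assumptions as the earlier examples, is:

    /*
     * Sketch only: an idle host still polls the admin queue so the driver can send
     * the KEEP ALIVE (18h) commands seen earlier ("Sending keep alive every
     * 5000000 us"), and spdk_nvme_detach() starts the shutdown whose CSTS polling
     * is being logged here.
     */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void poll_then_teardown(struct spdk_nvme_ctrlr *ctrlr, volatile bool *running)
    {
        while (*running) {
            /* Services keep-alive and any pending admin completions. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }

        /* Triggers the controller shutdown sequence recorded around this point. */
        spdk_nvme_detach(ctrlr);
    }
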
00:25:36.432 [2024-11-17 19:34:34.641620] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.641636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641652] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.641662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.641688] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.641780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.432 [2024-11-17 19:34:34.641792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.432 [2024-11-17 19:34:34.641798] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641804] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.432 [2024-11-17 19:34:34.641821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.432 [2024-11-17 19:34:34.641836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.432 [2024-11-17 19:34:34.641847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.432 [2024-11-17 19:34:34.641866] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.432 [2024-11-17 19:34:34.641950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.433 [2024-11-17 19:34:34.641963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.433 [2024-11-17 19:34:34.641973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.641980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.433 [2024-11-17 19:34:34.641998] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642007] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.433 [2024-11-17 19:34:34.642023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.433 [2024-11-17 19:34:34.642043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.433 [2024-11-17 19:34:34.642124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.433 [2024-11-17 19:34:34.642138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.433 [2024-11-17 19:34:34.642144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.433 [2024-11-17 19:34:34.642167] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642183] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.433 [2024-11-17 19:34:34.642193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.433 [2024-11-17 19:34:34.642212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.433 [2024-11-17 19:34:34.642293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.433 [2024-11-17 19:34:34.642306] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.433 [2024-11-17 19:34:34.642313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.433 [2024-11-17 19:34:34.642336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.433 [2024-11-17 19:34:34.642362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.433 [2024-11-17 19:34:34.642382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.433 [2024-11-17 19:34:34.642467] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.433 [2024-11-17 19:34:34.642480] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.433 [2024-11-17 19:34:34.642487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.433 [2024-11-17 19:34:34.642510] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.433 [2024-11-17 19:34:34.642535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.433 [2024-11-17 19:34:34.642555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.433 [2024-11-17 19:34:34.642639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.433 [2024-11-17 19:34:34.642652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.433 [2024-11-17 19:34:34.642659] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.642669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.433 [2024-11-17 19:34:34.646698] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.646711] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:36.433 [2024-11-17 
19:34:34.646717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f642f0) 00:25:36.433 [2024-11-17 19:34:34.646728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.433 [2024-11-17 19:34:34.646748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fbd2e0, cid 3, qid 0 00:25:36.433 [2024-11-17 19:34:34.646881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:36.433 [2024-11-17 19:34:34.646895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:36.433 [2024-11-17 19:34:34.646902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:36.433 [2024-11-17 19:34:34.646908] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fbd2e0) on tqpair=0x1f642f0 00:25:36.433 [2024-11-17 19:34:34.646921] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:36.433 0 Kelvin (-273 Celsius) 00:25:36.433 Available Spare: 0% 00:25:36.433 Available Spare Threshold: 0% 00:25:36.433 Life Percentage Used: 0% 00:25:36.433 Data Units Read: 0 00:25:36.433 Data Units Written: 0 00:25:36.433 Host Read Commands: 0 00:25:36.433 Host Write Commands: 0 00:25:36.433 Controller Busy Time: 0 minutes 00:25:36.433 Power Cycles: 0 00:25:36.433 Power On Hours: 0 hours 00:25:36.433 Unsafe Shutdowns: 0 00:25:36.433 Unrecoverable Media Errors: 0 00:25:36.433 Lifetime Error Log Entries: 0 00:25:36.433 Warning Temperature Time: 0 minutes 00:25:36.433 Critical Temperature Time: 0 minutes 00:25:36.433 00:25:36.433 Number of Queues 00:25:36.433 ================ 00:25:36.433 Number of I/O Submission Queues: 127 00:25:36.433 Number of I/O Completion Queues: 127 00:25:36.433 00:25:36.433 Active Namespaces 00:25:36.433 ================= 00:25:36.433 Namespace ID:1 00:25:36.433 Error Recovery Timeout: Unlimited 00:25:36.433 Command Set Identifier: NVM (00h) 00:25:36.433 Deallocate: Supported 00:25:36.433 Deallocated/Unwritten Error: Not Supported 00:25:36.433 Deallocated Read Value: Unknown 00:25:36.433 Deallocate in Write Zeroes: Not Supported 00:25:36.433 Deallocated Guard Field: 0xFFFF 00:25:36.433 Flush: Supported 00:25:36.433 Reservation: Supported 00:25:36.433 Namespace Sharing Capabilities: Multiple Controllers 00:25:36.433 Size (in LBAs): 131072 (0GiB) 00:25:36.433 Capacity (in LBAs): 131072 (0GiB) 00:25:36.433 Utilization (in LBAs): 131072 (0GiB) 00:25:36.433 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:36.433 EUI64: ABCDEF0123456789 00:25:36.433 UUID: 37e0ad51-57f5-4159-ba52-7ac97c0761f1 00:25:36.433 Thin Provisioning: Not Supported 00:25:36.433 Per-NS Atomic Units: Yes 00:25:36.433 Atomic Boundary Size (Normal): 0 00:25:36.433 Atomic Boundary Size (PFail): 0 00:25:36.433 Atomic Boundary Offset: 0 00:25:36.433 Maximum Single Source Range Length: 65535 00:25:36.433 Maximum Copy Length: 65535 00:25:36.433 Maximum Source Range Count: 1 00:25:36.433 NGUID/EUI64 Never Reused: No 00:25:36.433 Namespace Write Protected: No 00:25:36.433 Number of LBA Formats: 1 00:25:36.433 Current LBA Format: LBA Format #00 00:25:36.433 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:36.433 00:25:36.433 19:34:34 -- host/identify.sh@51 -- # sync 00:25:36.433 19:34:34 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.433 19:34:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.433 19:34:34 -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.433 19:34:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.433 19:34:34 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:36.433 19:34:34 -- host/identify.sh@56 -- # nvmftestfini 00:25:36.433 19:34:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:36.433 19:34:34 -- nvmf/common.sh@116 -- # sync 00:25:36.433 19:34:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:36.433 19:34:34 -- nvmf/common.sh@119 -- # set +e 00:25:36.433 19:34:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:36.433 19:34:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:36.433 rmmod nvme_tcp 00:25:36.433 rmmod nvme_fabrics 00:25:36.691 rmmod nvme_keyring 00:25:36.691 19:34:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:36.691 19:34:34 -- nvmf/common.sh@123 -- # set -e 00:25:36.691 19:34:34 -- nvmf/common.sh@124 -- # return 0 00:25:36.691 19:34:34 -- nvmf/common.sh@477 -- # '[' -n 1285733 ']' 00:25:36.691 19:34:34 -- nvmf/common.sh@478 -- # killprocess 1285733 00:25:36.691 19:34:34 -- common/autotest_common.sh@936 -- # '[' -z 1285733 ']' 00:25:36.691 19:34:34 -- common/autotest_common.sh@940 -- # kill -0 1285733 00:25:36.691 19:34:34 -- common/autotest_common.sh@941 -- # uname 00:25:36.691 19:34:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.691 19:34:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1285733 00:25:36.691 19:34:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:36.691 19:34:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:36.691 19:34:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1285733' 00:25:36.691 killing process with pid 1285733 00:25:36.691 19:34:34 -- common/autotest_common.sh@955 -- # kill 1285733 00:25:36.691 [2024-11-17 19:34:34.764858] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:36.691 19:34:34 -- common/autotest_common.sh@960 -- # wait 1285733 00:25:36.950 19:34:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:36.950 19:34:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:36.950 19:34:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:36.951 19:34:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.951 19:34:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:36.951 19:34:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.951 19:34:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.951 19:34:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.853 19:34:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:38.853 00:25:38.853 real 0m6.407s 00:25:38.853 user 0m8.008s 00:25:38.853 sys 0m2.068s 00:25:38.853 19:34:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:38.853 19:34:37 -- common/autotest_common.sh@10 -- # set +x 00:25:38.853 ************************************ 00:25:38.853 END TEST nvmf_identify 00:25:38.853 ************************************ 00:25:38.853 19:34:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:38.853 19:34:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:38.853 19:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.853 19:34:37 -- common/autotest_common.sh@10 
-- # set +x 00:25:38.853 ************************************ 00:25:38.853 START TEST nvmf_perf 00:25:38.853 ************************************ 00:25:38.853 19:34:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:39.113 * Looking for test storage... 00:25:39.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.113 19:34:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:39.113 19:34:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:39.113 19:34:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:39.113 19:34:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:39.113 19:34:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:39.113 19:34:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:39.113 19:34:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:39.113 19:34:37 -- scripts/common.sh@335 -- # IFS=.-: 00:25:39.113 19:34:37 -- scripts/common.sh@335 -- # read -ra ver1 00:25:39.113 19:34:37 -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.113 19:34:37 -- scripts/common.sh@336 -- # read -ra ver2 00:25:39.113 19:34:37 -- scripts/common.sh@337 -- # local 'op=<' 00:25:39.113 19:34:37 -- scripts/common.sh@339 -- # ver1_l=2 00:25:39.113 19:34:37 -- scripts/common.sh@340 -- # ver2_l=1 00:25:39.113 19:34:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:39.113 19:34:37 -- scripts/common.sh@343 -- # case "$op" in 00:25:39.113 19:34:37 -- scripts/common.sh@344 -- # : 1 00:25:39.113 19:34:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:39.113 19:34:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.113 19:34:37 -- scripts/common.sh@364 -- # decimal 1 00:25:39.113 19:34:37 -- scripts/common.sh@352 -- # local d=1 00:25:39.113 19:34:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.113 19:34:37 -- scripts/common.sh@354 -- # echo 1 00:25:39.113 19:34:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:39.113 19:34:37 -- scripts/common.sh@365 -- # decimal 2 00:25:39.113 19:34:37 -- scripts/common.sh@352 -- # local d=2 00:25:39.113 19:34:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.113 19:34:37 -- scripts/common.sh@354 -- # echo 2 00:25:39.113 19:34:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:39.113 19:34:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:39.113 19:34:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:39.113 19:34:37 -- scripts/common.sh@367 -- # return 0 00:25:39.113 19:34:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.113 19:34:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.113 --rc genhtml_branch_coverage=1 00:25:39.113 --rc genhtml_function_coverage=1 00:25:39.113 --rc genhtml_legend=1 00:25:39.113 --rc geninfo_all_blocks=1 00:25:39.113 --rc geninfo_unexecuted_blocks=1 00:25:39.113 00:25:39.113 ' 00:25:39.113 19:34:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.113 --rc genhtml_branch_coverage=1 00:25:39.113 --rc genhtml_function_coverage=1 00:25:39.113 --rc genhtml_legend=1 00:25:39.113 --rc geninfo_all_blocks=1 00:25:39.113 --rc geninfo_unexecuted_blocks=1 00:25:39.113 00:25:39.113 ' 00:25:39.113 19:34:37 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.113 --rc genhtml_branch_coverage=1 00:25:39.113 --rc genhtml_function_coverage=1 00:25:39.113 --rc genhtml_legend=1 00:25:39.113 --rc geninfo_all_blocks=1 00:25:39.113 --rc geninfo_unexecuted_blocks=1 00:25:39.113 00:25:39.113 ' 00:25:39.113 19:34:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.113 --rc genhtml_branch_coverage=1 00:25:39.113 --rc genhtml_function_coverage=1 00:25:39.113 --rc genhtml_legend=1 00:25:39.113 --rc geninfo_all_blocks=1 00:25:39.113 --rc geninfo_unexecuted_blocks=1 00:25:39.113 00:25:39.113 ' 00:25:39.113 19:34:37 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.113 19:34:37 -- nvmf/common.sh@7 -- # uname -s 00:25:39.113 19:34:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.113 19:34:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.113 19:34:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.113 19:34:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.113 19:34:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.113 19:34:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.113 19:34:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.113 19:34:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.113 19:34:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.113 19:34:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.113 19:34:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.113 19:34:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.113 19:34:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.113 19:34:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.113 19:34:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.113 19:34:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.113 19:34:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.113 19:34:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.113 19:34:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.114 19:34:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.114 19:34:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.114 19:34:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.114 19:34:37 -- paths/export.sh@5 -- # export PATH 00:25:39.114 19:34:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.114 19:34:37 -- nvmf/common.sh@46 -- # : 0 00:25:39.114 19:34:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:39.114 19:34:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:39.114 19:34:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:39.114 19:34:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.114 19:34:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.114 19:34:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:39.114 19:34:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:39.114 19:34:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:39.114 19:34:37 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:39.114 19:34:37 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:39.114 19:34:37 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:39.114 19:34:37 -- host/perf.sh@17 -- # nvmftestinit 00:25:39.114 19:34:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:39.114 19:34:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.114 19:34:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:39.114 19:34:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:39.114 19:34:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:39.114 19:34:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.114 19:34:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.114 19:34:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.114 19:34:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:39.114 19:34:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:39.114 19:34:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:39.114 19:34:37 -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.017 19:34:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:41.017 19:34:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:41.017 19:34:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:41.017 19:34:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:41.274 19:34:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:41.274 19:34:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:41.274 19:34:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:41.274 19:34:39 -- nvmf/common.sh@294 -- # net_devs=() 00:25:41.274 19:34:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:41.275 19:34:39 -- nvmf/common.sh@295 -- # e810=() 00:25:41.275 19:34:39 -- nvmf/common.sh@295 -- # local -ga e810 00:25:41.275 19:34:39 -- nvmf/common.sh@296 -- # x722=() 00:25:41.275 19:34:39 -- nvmf/common.sh@296 -- # local -ga x722 00:25:41.275 19:34:39 -- nvmf/common.sh@297 -- # mlx=() 00:25:41.275 19:34:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:41.275 19:34:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.275 19:34:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:41.275 19:34:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:41.275 19:34:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:41.275 19:34:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:41.275 19:34:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:41.275 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:41.275 19:34:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:41.275 19:34:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:41.275 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:41.275 19:34:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:25:41.275 19:34:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:41.275 19:34:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:41.275 19:34:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.275 19:34:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:41.275 19:34:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.275 19:34:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:41.275 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:41.275 19:34:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.275 19:34:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:41.275 19:34:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.275 19:34:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:41.275 19:34:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.275 19:34:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:41.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:41.275 19:34:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.275 19:34:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:41.275 19:34:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:41.275 19:34:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:41.275 19:34:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.275 19:34:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.275 19:34:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.275 19:34:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:41.275 19:34:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.275 19:34:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.275 19:34:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:41.275 19:34:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.275 19:34:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.275 19:34:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:41.275 19:34:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:41.275 19:34:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.275 19:34:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.275 19:34:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.275 19:34:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.275 19:34:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:41.275 19:34:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.275 19:34:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.275 19:34:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.275 19:34:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:41.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:41.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:25:41.275 00:25:41.275 --- 10.0.0.2 ping statistics --- 00:25:41.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.275 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:25:41.275 19:34:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:25:41.275 00:25:41.275 --- 10.0.0.1 ping statistics --- 00:25:41.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.275 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:25:41.275 19:34:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.275 19:34:39 -- nvmf/common.sh@410 -- # return 0 00:25:41.275 19:34:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:41.275 19:34:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.275 19:34:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:41.275 19:34:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.275 19:34:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:41.275 19:34:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:41.275 19:34:39 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:41.275 19:34:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:41.275 19:34:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.275 19:34:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.275 19:34:39 -- nvmf/common.sh@469 -- # nvmfpid=1287846 00:25:41.275 19:34:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:41.275 19:34:39 -- nvmf/common.sh@470 -- # waitforlisten 1287846 00:25:41.275 19:34:39 -- common/autotest_common.sh@829 -- # '[' -z 1287846 ']' 00:25:41.275 19:34:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.275 19:34:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.275 19:34:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.275 19:34:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.275 19:34:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.275 [2024-11-17 19:34:39.495876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:41.275 [2024-11-17 19:34:39.495944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.275 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.534 [2024-11-17 19:34:39.564016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.534 [2024-11-17 19:34:39.651411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.534 [2024-11-17 19:34:39.651564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.534 [2024-11-17 19:34:39.651582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:41.534 [2024-11-17 19:34:39.651594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.534 [2024-11-17 19:34:39.651651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.534 [2024-11-17 19:34:39.651729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.534 [2024-11-17 19:34:39.651766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.534 [2024-11-17 19:34:39.651769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.525 19:34:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.525 19:34:40 -- common/autotest_common.sh@862 -- # return 0 00:25:42.525 19:34:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:42.525 19:34:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.525 19:34:40 -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 19:34:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.525 19:34:40 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:42.525 19:34:40 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:45.805 19:34:43 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:45.805 19:34:43 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:45.805 19:34:43 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:45.805 19:34:43 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:46.062 19:34:44 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:46.062 19:34:44 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:46.062 19:34:44 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:46.062 19:34:44 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:46.062 19:34:44 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:46.318 [2024-11-17 19:34:44.483939] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.318 19:34:44 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.574 19:34:44 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:46.574 19:34:44 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:46.831 19:34:45 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:46.831 19:34:45 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:47.087 19:34:45 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.343 [2024-11-17 19:34:45.495787] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.343 19:34:45 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:47.600 19:34:45 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:47.600 19:34:45 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:47.600 19:34:45 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:47.600 19:34:45 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:48.971 Initializing NVMe Controllers 00:25:48.971 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:48.971 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:48.971 Initialization complete. Launching workers. 00:25:48.971 ======================================================== 00:25:48.971 Latency(us) 00:25:48.971 Device Information : IOPS MiB/s Average min max 00:25:48.971 PCIE (0000:88:00.0) NSID 1 from core 0: 85240.05 332.97 375.02 48.68 7287.31 00:25:48.971 ======================================================== 00:25:48.971 Total : 85240.05 332.97 375.02 48.68 7287.31 00:25:48.971 00:25:48.971 19:34:47 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:48.971 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.343 Initializing NVMe Controllers 00:25:50.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:50.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:50.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:50.343 Initialization complete. Launching workers. 00:25:50.343 ======================================================== 00:25:50.343 Latency(us) 00:25:50.343 Device Information : IOPS MiB/s Average min max 00:25:50.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.70 0.33 12006.20 132.89 45001.79 00:25:50.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.78 0.25 16181.47 5999.73 47904.96 00:25:50.343 ======================================================== 00:25:50.343 Total : 147.48 0.58 13783.51 132.89 47904.96 00:25:50.343 00:25:50.343 19:34:48 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.343 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.715 Initializing NVMe Controllers 00:25:51.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:51.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:51.715 Initialization complete. Launching workers. 
00:25:51.715 ======================================================== 00:25:51.715 Latency(us) 00:25:51.715 Device Information : IOPS MiB/s Average min max 00:25:51.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8565.00 33.46 3737.28 527.63 10056.33 00:25:51.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3865.46 15.10 8278.92 5113.35 19107.08 00:25:51.715 ======================================================== 00:25:51.715 Total : 12430.46 48.56 5149.58 527.63 19107.08 00:25:51.715 00:25:51.715 19:34:49 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:51.715 19:34:49 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:51.715 19:34:49 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.715 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.243 Initializing NVMe Controllers 00:25:54.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.243 Controller IO queue size 128, less than required. 00:25:54.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.243 Controller IO queue size 128, less than required. 00:25:54.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:54.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:54.243 Initialization complete. Launching workers. 00:25:54.243 ======================================================== 00:25:54.243 Latency(us) 00:25:54.243 Device Information : IOPS MiB/s Average min max 00:25:54.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1554.05 388.51 84281.25 49308.46 136729.83 00:25:54.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 566.15 141.54 229279.13 79139.20 369841.14 00:25:54.243 ======================================================== 00:25:54.243 Total : 2120.20 530.05 122999.75 49308.46 369841.14 00:25:54.243 00:25:54.243 19:34:52 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:54.243 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.243 No valid NVMe controllers or AIO or URING devices found 00:25:54.243 Initializing NVMe Controllers 00:25:54.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.243 Controller IO queue size 128, less than required. 00:25:54.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.243 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:54.243 Controller IO queue size 128, less than required. 00:25:54.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.243 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:54.243 WARNING: Some requested NVMe devices were skipped 00:25:54.243 19:34:52 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:54.243 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.527 Initializing NVMe Controllers 00:25:57.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.527 Controller IO queue size 128, less than required. 00:25:57.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.527 Controller IO queue size 128, less than required. 00:25:57.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:57.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:57.527 Initialization complete. Launching workers. 00:25:57.527 00:25:57.527 ==================== 00:25:57.527 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:57.527 TCP transport: 00:25:57.527 polls: 10966 00:25:57.527 idle_polls: 8176 00:25:57.527 sock_completions: 2790 00:25:57.527 nvme_completions: 5511 00:25:57.527 submitted_requests: 8397 00:25:57.527 queued_requests: 1 00:25:57.527 00:25:57.527 ==================== 00:25:57.527 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:57.527 TCP transport: 00:25:57.527 polls: 11093 00:25:57.527 idle_polls: 8304 00:25:57.527 sock_completions: 2789 00:25:57.527 nvme_completions: 5446 00:25:57.527 submitted_requests: 8285 00:25:57.527 queued_requests: 1 00:25:57.527 ======================================================== 00:25:57.527 Latency(us) 00:25:57.527 Device Information : IOPS MiB/s Average min max 00:25:57.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1441.27 360.32 91337.56 63983.91 158824.57 00:25:57.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1424.77 356.19 90780.88 40097.66 126838.23 00:25:57.527 ======================================================== 00:25:57.527 Total : 2866.04 716.51 91060.82 40097.66 158824.57 00:25:57.527 00:25:57.527 19:34:55 -- host/perf.sh@66 -- # sync 00:25:57.527 19:34:55 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.527 19:34:55 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:57.527 19:34:55 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:25:57.527 19:34:55 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:00.806 19:34:58 -- host/perf.sh@72 -- # ls_guid=d3985447-3b69-4fb5-9608-8ce3923c6bb6 00:26:00.806 19:34:58 -- host/perf.sh@73 -- # get_lvs_free_mb d3985447-3b69-4fb5-9608-8ce3923c6bb6 00:26:00.806 19:34:58 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d3985447-3b69-4fb5-9608-8ce3923c6bb6 00:26:00.806 19:34:58 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:00.806 19:34:58 -- common/autotest_common.sh@1355 -- # local fc 00:26:00.806 19:34:58 -- common/autotest_common.sh@1356 -- # local cs 00:26:00.806 19:34:58 -- common/autotest_common.sh@1357 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:00.806 19:34:58 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:00.806 { 00:26:00.806 "uuid": "d3985447-3b69-4fb5-9608-8ce3923c6bb6", 00:26:00.806 "name": "lvs_0", 00:26:00.806 "base_bdev": "Nvme0n1", 00:26:00.806 "total_data_clusters": 238234, 00:26:00.806 "free_clusters": 238234, 00:26:00.806 "block_size": 512, 00:26:00.806 "cluster_size": 4194304 00:26:00.806 } 00:26:00.806 ]' 00:26:00.806 19:34:58 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d3985447-3b69-4fb5-9608-8ce3923c6bb6") .free_clusters' 00:26:00.806 19:34:58 -- common/autotest_common.sh@1358 -- # fc=238234 00:26:00.806 19:34:58 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d3985447-3b69-4fb5-9608-8ce3923c6bb6") .cluster_size' 00:26:00.806 19:34:58 -- common/autotest_common.sh@1359 -- # cs=4194304 00:26:00.806 19:34:58 -- common/autotest_common.sh@1362 -- # free_mb=952936 00:26:00.806 19:34:58 -- common/autotest_common.sh@1363 -- # echo 952936 00:26:00.806 952936 00:26:00.806 19:34:58 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:00.806 19:34:58 -- host/perf.sh@78 -- # free_mb=20480 00:26:00.807 19:34:58 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3985447-3b69-4fb5-9608-8ce3923c6bb6 lbd_0 20480 00:26:01.390 19:34:59 -- host/perf.sh@80 -- # lb_guid=a6d15ab7-8821-43bf-9c70-119034653bea 00:26:01.390 19:34:59 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a6d15ab7-8821-43bf-9c70-119034653bea lvs_n_0 00:26:02.321 19:35:00 -- host/perf.sh@83 -- # ls_nested_guid=c1c261e3-21d8-4a61-9454-62d294838eb0 00:26:02.321 19:35:00 -- host/perf.sh@84 -- # get_lvs_free_mb c1c261e3-21d8-4a61-9454-62d294838eb0 00:26:02.321 19:35:00 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c1c261e3-21d8-4a61-9454-62d294838eb0 00:26:02.321 19:35:00 -- common/autotest_common.sh@1354 -- # local lvs_info 00:26:02.321 19:35:00 -- common/autotest_common.sh@1355 -- # local fc 00:26:02.321 19:35:00 -- common/autotest_common.sh@1356 -- # local cs 00:26:02.321 19:35:00 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:02.321 19:35:00 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:26:02.321 { 00:26:02.321 "uuid": "d3985447-3b69-4fb5-9608-8ce3923c6bb6", 00:26:02.321 "name": "lvs_0", 00:26:02.321 "base_bdev": "Nvme0n1", 00:26:02.321 "total_data_clusters": 238234, 00:26:02.321 "free_clusters": 233114, 00:26:02.321 "block_size": 512, 00:26:02.321 "cluster_size": 4194304 00:26:02.321 }, 00:26:02.321 { 00:26:02.321 "uuid": "c1c261e3-21d8-4a61-9454-62d294838eb0", 00:26:02.321 "name": "lvs_n_0", 00:26:02.321 "base_bdev": "a6d15ab7-8821-43bf-9c70-119034653bea", 00:26:02.321 "total_data_clusters": 5114, 00:26:02.321 "free_clusters": 5114, 00:26:02.321 "block_size": 512, 00:26:02.321 "cluster_size": 4194304 00:26:02.321 } 00:26:02.321 ]' 00:26:02.321 19:35:00 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c1c261e3-21d8-4a61-9454-62d294838eb0") .free_clusters' 00:26:02.579 19:35:00 -- common/autotest_common.sh@1358 -- # fc=5114 00:26:02.579 19:35:00 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c1c261e3-21d8-4a61-9454-62d294838eb0") .cluster_size' 00:26:02.579 19:35:00 -- common/autotest_common.sh@1359 -- # cs=4194304 00:26:02.579 19:35:00 -- common/autotest_common.sh@1362 -- # 
free_mb=20456 00:26:02.579 19:35:00 -- common/autotest_common.sh@1363 -- # echo 20456 00:26:02.579 20456 00:26:02.579 19:35:00 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:02.579 19:35:00 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1c261e3-21d8-4a61-9454-62d294838eb0 lbd_nest_0 20456 00:26:02.837 19:35:00 -- host/perf.sh@88 -- # lb_nested_guid=ce454c69-c39f-4828-9d3a-644c8762c128 00:26:02.837 19:35:00 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.094 19:35:01 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:03.094 19:35:01 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ce454c69-c39f-4828-9d3a-644c8762c128 00:26:03.352 19:35:01 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.611 19:35:01 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:03.611 19:35:01 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:03.611 19:35:01 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:03.611 19:35:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:03.611 19:35:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:03.611 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.806 Initializing NVMe Controllers 00:26:15.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:15.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:15.806 Initialization complete. Launching workers. 00:26:15.806 ======================================================== 00:26:15.806 Latency(us) 00:26:15.806 Device Information : IOPS MiB/s Average min max 00:26:15.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.40 0.02 22570.36 166.30 46872.67 00:26:15.806 ======================================================== 00:26:15.806 Total : 44.40 0.02 22570.36 166.30 46872.67 00:26:15.806 00:26:15.806 19:35:12 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:15.806 19:35:12 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:15.806 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.769 Initializing NVMe Controllers 00:26:25.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:25.769 Initialization complete. Launching workers. 
00:26:25.769 ======================================================== 00:26:25.769 Latency(us) 00:26:25.769 Device Information : IOPS MiB/s Average min max 00:26:25.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.20 9.40 13315.38 4979.74 47945.87 00:26:25.770 ======================================================== 00:26:25.770 Total : 75.20 9.40 13315.38 4979.74 47945.87 00:26:25.770 00:26:25.770 19:35:22 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:25.770 19:35:22 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:25.770 19:35:22 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.770 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.732 Initializing NVMe Controllers 00:26:35.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:35.732 Initialization complete. Launching workers. 00:26:35.732 ======================================================== 00:26:35.732 Latency(us) 00:26:35.732 Device Information : IOPS MiB/s Average min max 00:26:35.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7900.00 3.86 4050.77 273.16 11575.53 00:26:35.732 ======================================================== 00:26:35.732 Total : 7900.00 3.86 4050.77 273.16 11575.53 00:26:35.733 00:26:35.733 19:35:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:35.733 19:35:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.733 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.701 Initializing NVMe Controllers 00:26:45.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:45.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:45.701 Initialization complete. Launching workers. 00:26:45.701 ======================================================== 00:26:45.701 Latency(us) 00:26:45.701 Device Information : IOPS MiB/s Average min max 00:26:45.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4061.30 507.66 7878.85 611.12 18360.52 00:26:45.701 ======================================================== 00:26:45.701 Total : 4061.30 507.66 7878.85 611.12 18360.52 00:26:45.701 00:26:45.701 19:35:42 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:45.701 19:35:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:45.701 19:35:42 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:45.701 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.742 Initializing NVMe Controllers 00:26:55.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.742 Controller IO queue size 128, less than required. 00:26:55.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:55.742 Initialization complete. Launching workers. 
00:26:55.742 ======================================================== 00:26:55.742 Latency(us) 00:26:55.742 Device Information : IOPS MiB/s Average min max 00:26:55.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12030.59 5.87 10646.75 1851.97 26768.52 00:26:55.742 ======================================================== 00:26:55.742 Total : 12030.59 5.87 10646.75 1851.97 26768.52 00:26:55.742 00:26:55.742 19:35:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:55.742 19:35:53 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:55.742 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.708 Initializing NVMe Controllers 00:27:05.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:05.708 Controller IO queue size 128, less than required. 00:27:05.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:05.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:05.708 Initialization complete. Launching workers. 00:27:05.708 ======================================================== 00:27:05.708 Latency(us) 00:27:05.708 Device Information : IOPS MiB/s Average min max 00:27:05.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1203.50 150.44 106860.36 23470.89 220250.84 00:27:05.708 ======================================================== 00:27:05.708 Total : 1203.50 150.44 106860.36 23470.89 220250.84 00:27:05.708 00:27:05.708 19:36:03 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.708 19:36:03 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce454c69-c39f-4828-9d3a-644c8762c128 00:27:06.274 19:36:04 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:06.532 19:36:04 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6d15ab7-8821-43bf-9c70-119034653bea 00:27:07.098 19:36:05 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:07.098 19:36:05 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:07.098 19:36:05 -- host/perf.sh@114 -- # nvmftestfini 00:27:07.098 19:36:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:07.098 19:36:05 -- nvmf/common.sh@116 -- # sync 00:27:07.357 19:36:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:07.357 19:36:05 -- nvmf/common.sh@119 -- # set +e 00:27:07.357 19:36:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:07.357 19:36:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:07.357 rmmod nvme_tcp 00:27:07.357 rmmod nvme_fabrics 00:27:07.357 rmmod nvme_keyring 00:27:07.357 19:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:07.357 19:36:05 -- nvmf/common.sh@123 -- # set -e 00:27:07.357 19:36:05 -- nvmf/common.sh@124 -- # return 0 00:27:07.357 19:36:05 -- nvmf/common.sh@477 -- # '[' -n 1287846 ']' 00:27:07.357 19:36:05 -- nvmf/common.sh@478 -- # killprocess 1287846 00:27:07.357 19:36:05 -- common/autotest_common.sh@936 -- # '[' -z 1287846 ']' 00:27:07.357 19:36:05 -- common/autotest_common.sh@940 -- # 
kill -0 1287846 00:27:07.357 19:36:05 -- common/autotest_common.sh@941 -- # uname 00:27:07.357 19:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:07.357 19:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1287846 00:27:07.357 19:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:07.357 19:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:07.357 19:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1287846' 00:27:07.357 killing process with pid 1287846 00:27:07.357 19:36:05 -- common/autotest_common.sh@955 -- # kill 1287846 00:27:07.357 19:36:05 -- common/autotest_common.sh@960 -- # wait 1287846 00:27:09.256 19:36:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:09.256 19:36:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:09.256 19:36:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:09.256 19:36:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.256 19:36:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:09.256 19:36:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.256 19:36:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.256 19:36:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.158 19:36:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:11.158 00:27:11.158 real 1m31.993s 00:27:11.158 user 5m40.684s 00:27:11.158 sys 0m15.476s 00:27:11.158 19:36:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.158 19:36:09 -- common/autotest_common.sh@10 -- # set +x 00:27:11.158 ************************************ 00:27:11.158 END TEST nvmf_perf 00:27:11.158 ************************************ 00:27:11.158 19:36:09 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:11.158 19:36:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:11.158 19:36:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:11.158 19:36:09 -- common/autotest_common.sh@10 -- # set +x 00:27:11.158 ************************************ 00:27:11.158 START TEST nvmf_fio_host 00:27:11.158 ************************************ 00:27:11.158 19:36:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:11.158 * Looking for test storage... 
00:27:11.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.158 19:36:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:11.158 19:36:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:11.158 19:36:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:11.158 19:36:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:11.158 19:36:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:11.158 19:36:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:11.158 19:36:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:11.158 19:36:09 -- scripts/common.sh@335 -- # IFS=.-: 00:27:11.158 19:36:09 -- scripts/common.sh@335 -- # read -ra ver1 00:27:11.158 19:36:09 -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.158 19:36:09 -- scripts/common.sh@336 -- # read -ra ver2 00:27:11.158 19:36:09 -- scripts/common.sh@337 -- # local 'op=<' 00:27:11.158 19:36:09 -- scripts/common.sh@339 -- # ver1_l=2 00:27:11.158 19:36:09 -- scripts/common.sh@340 -- # ver2_l=1 00:27:11.158 19:36:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:11.158 19:36:09 -- scripts/common.sh@343 -- # case "$op" in 00:27:11.158 19:36:09 -- scripts/common.sh@344 -- # : 1 00:27:11.158 19:36:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:11.158 19:36:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.158 19:36:09 -- scripts/common.sh@364 -- # decimal 1 00:27:11.158 19:36:09 -- scripts/common.sh@352 -- # local d=1 00:27:11.158 19:36:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.159 19:36:09 -- scripts/common.sh@354 -- # echo 1 00:27:11.159 19:36:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:11.159 19:36:09 -- scripts/common.sh@365 -- # decimal 2 00:27:11.159 19:36:09 -- scripts/common.sh@352 -- # local d=2 00:27:11.159 19:36:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.159 19:36:09 -- scripts/common.sh@354 -- # echo 2 00:27:11.159 19:36:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:11.159 19:36:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:11.159 19:36:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:11.159 19:36:09 -- scripts/common.sh@367 -- # return 0 00:27:11.159 19:36:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.159 19:36:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.159 --rc genhtml_branch_coverage=1 00:27:11.159 --rc genhtml_function_coverage=1 00:27:11.159 --rc genhtml_legend=1 00:27:11.159 --rc geninfo_all_blocks=1 00:27:11.159 --rc geninfo_unexecuted_blocks=1 00:27:11.159 00:27:11.159 ' 00:27:11.159 19:36:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.159 --rc genhtml_branch_coverage=1 00:27:11.159 --rc genhtml_function_coverage=1 00:27:11.159 --rc genhtml_legend=1 00:27:11.159 --rc geninfo_all_blocks=1 00:27:11.159 --rc geninfo_unexecuted_blocks=1 00:27:11.159 00:27:11.159 ' 00:27:11.159 19:36:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.159 --rc genhtml_branch_coverage=1 00:27:11.159 --rc genhtml_function_coverage=1 00:27:11.159 --rc genhtml_legend=1 00:27:11.159 --rc geninfo_all_blocks=1 00:27:11.159 --rc geninfo_unexecuted_blocks=1 00:27:11.159 00:27:11.159 ' 
00:27:11.159 19:36:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.159 --rc genhtml_branch_coverage=1 00:27:11.159 --rc genhtml_function_coverage=1 00:27:11.159 --rc genhtml_legend=1 00:27:11.159 --rc geninfo_all_blocks=1 00:27:11.159 --rc geninfo_unexecuted_blocks=1 00:27:11.159 00:27:11.159 ' 00:27:11.159 19:36:09 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.159 19:36:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.159 19:36:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.159 19:36:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.159 19:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- paths/export.sh@5 -- # export PATH 00:27:11.159 19:36:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.159 19:36:09 -- nvmf/common.sh@7 -- # uname -s 00:27:11.159 19:36:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.159 19:36:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.159 19:36:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.159 19:36:09 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.159 19:36:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.159 19:36:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.159 19:36:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.159 19:36:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.159 19:36:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.159 19:36:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.159 19:36:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.159 19:36:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.159 19:36:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.159 19:36:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.159 19:36:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.159 19:36:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.159 19:36:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.159 19:36:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.159 19:36:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.159 19:36:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- paths/export.sh@5 -- # export PATH 00:27:11.159 19:36:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.159 19:36:09 -- nvmf/common.sh@46 -- # : 0 00:27:11.159 19:36:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:11.159 19:36:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:11.159 19:36:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:11.159 19:36:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.159 19:36:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.159 19:36:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:11.159 19:36:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:11.159 19:36:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:11.159 19:36:09 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:11.159 19:36:09 -- host/fio.sh@14 -- # nvmftestinit 00:27:11.159 19:36:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:11.159 19:36:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.159 19:36:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:11.159 19:36:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:11.159 19:36:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:11.159 19:36:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.159 19:36:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.159 19:36:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.159 19:36:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:11.159 19:36:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:11.159 19:36:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:11.159 19:36:09 -- common/autotest_common.sh@10 -- # set +x 00:27:13.063 19:36:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:13.063 19:36:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:13.063 19:36:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:13.063 19:36:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:13.063 19:36:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:13.063 19:36:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:13.063 19:36:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:13.063 19:36:11 -- nvmf/common.sh@294 -- # net_devs=() 00:27:13.063 19:36:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:13.063 19:36:11 -- nvmf/common.sh@295 -- # e810=() 00:27:13.063 19:36:11 -- nvmf/common.sh@295 -- # local -ga e810 00:27:13.063 19:36:11 -- nvmf/common.sh@296 -- # x722=() 00:27:13.063 19:36:11 -- nvmf/common.sh@296 -- # local -ga x722 00:27:13.063 19:36:11 -- nvmf/common.sh@297 -- # mlx=() 00:27:13.063 19:36:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:13.063 19:36:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.063 19:36:11 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.063 19:36:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:13.063 19:36:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:13.063 19:36:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:13.063 19:36:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:13.063 19:36:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.063 19:36:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:13.063 19:36:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.063 19:36:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:13.063 19:36:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:13.063 19:36:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.063 19:36:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:13.063 19:36:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.063 19:36:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.063 19:36:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.063 19:36:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:13.063 19:36:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.063 19:36:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:13.063 19:36:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.063 19:36:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.063 Found net devices under 0000:0a:00.1: cvl_0_1 
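The device scan above resolves each supported NIC's PCI address to its kernel interface name by glob-expanding that device's net/ directory in sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1. A minimal sketch of the same lookup, using the two E810 addresses (0x8086:0x159b, ice driver) from this run as the assumed input:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # same glob nvmf/common.sh expands
        pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done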
00:27:13.063 19:36:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.063 19:36:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:13.063 19:36:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:13.063 19:36:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:13.063 19:36:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:13.063 19:36:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.063 19:36:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.063 19:36:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.063 19:36:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:13.063 19:36:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.063 19:36:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.063 19:36:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:13.063 19:36:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.063 19:36:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.063 19:36:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:13.063 19:36:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:13.063 19:36:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.063 19:36:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.063 19:36:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.063 19:36:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.063 19:36:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:13.063 19:36:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.063 19:36:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.063 19:36:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.063 19:36:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:13.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:27:13.063 00:27:13.063 --- 10.0.0.2 ping statistics --- 00:27:13.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.063 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:27:13.063 19:36:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:27:13.322 00:27:13.322 --- 10.0.0.1 ping statistics --- 00:27:13.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.322 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:27:13.322 19:36:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.322 19:36:11 -- nvmf/common.sh@410 -- # return 0 00:27:13.322 19:36:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:13.322 19:36:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.322 19:36:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:13.322 19:36:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:13.322 19:36:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.322 19:36:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:13.322 19:36:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:13.322 19:36:11 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:13.322 19:36:11 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:13.322 19:36:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.322 19:36:11 -- common/autotest_common.sh@10 -- # set +x 00:27:13.322 19:36:11 -- host/fio.sh@24 -- # nvmfpid=1300998 00:27:13.322 19:36:11 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:13.322 19:36:11 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:13.322 19:36:11 -- host/fio.sh@28 -- # waitforlisten 1300998 00:27:13.322 19:36:11 -- common/autotest_common.sh@829 -- # '[' -z 1300998 ']' 00:27:13.322 19:36:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.322 19:36:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.322 19:36:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.322 19:36:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.322 19:36:11 -- common/autotest_common.sh@10 -- # set +x 00:27:13.322 [2024-11-17 19:36:11.403843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:13.322 [2024-11-17 19:36:11.403921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.322 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.322 [2024-11-17 19:36:11.472035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.322 [2024-11-17 19:36:11.561447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:13.322 [2024-11-17 19:36:11.561607] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.322 [2024-11-17 19:36:11.561628] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.322 [2024-11-17 19:36:11.561642] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
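The nvmf_tcp_init steps traced above turn the two ports into a point-to-point rig: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened on the initiator interface, reachability is checked with ping, and nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence follows; the polling loop stands in for the real waitforlisten helper and assumes the default /var/tmp/spdk.sock RPC socket seen in the trace.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # 0.232 ms in this run

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # illustrative wait

Once the RPC socket answers, host/fio.sh provisions the target over rpc.py exactly as traced below: nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc1, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420.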
00:27:13.322 [2024-11-17 19:36:11.561707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.322 [2024-11-17 19:36:11.561760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.322 [2024-11-17 19:36:11.561878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.322 [2024-11-17 19:36:11.561880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.255 19:36:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.256 19:36:12 -- common/autotest_common.sh@862 -- # return 0 00:27:14.256 19:36:12 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:14.514 [2024-11-17 19:36:12.691386] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.514 19:36:12 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:14.514 19:36:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:14.514 19:36:12 -- common/autotest_common.sh@10 -- # set +x 00:27:14.514 19:36:12 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:14.772 Malloc1 00:27:14.772 19:36:13 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.029 19:36:13 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:15.287 19:36:13 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.544 [2024-11-17 19:36:13.749353] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.544 19:36:13 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.802 19:36:14 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:15.802 19:36:14 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:15.802 19:36:14 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:15.802 19:36:14 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:15.802 19:36:14 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.802 19:36:14 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:15.802 19:36:14 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.802 19:36:14 -- common/autotest_common.sh@1330 -- # shift 00:27:15.802 19:36:14 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:15.802 19:36:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # grep 
libasan 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:15.802 19:36:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:15.802 19:36:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:15.802 19:36:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:15.802 19:36:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:15.802 19:36:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:15.802 19:36:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.059 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.059 fio-3.35 00:27:16.059 Starting 1 thread 00:27:16.059 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.587 00:27:18.587 test: (groupid=0, jobs=1): err= 0: pid=1301420: Sun Nov 17 19:36:16 2024 00:27:18.587 read: IOPS=9591, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec) 00:27:18.587 slat (nsec): min=1996, max=111264, avg=2535.69, stdev=1581.40 00:27:18.587 clat (usec): min=2227, max=13045, avg=7326.06, stdev=578.96 00:27:18.587 lat (usec): min=2246, max=13048, avg=7328.60, stdev=578.89 00:27:18.587 clat percentiles (usec): 00:27:18.587 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:27:18.587 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:27:18.587 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:27:18.587 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[11600], 00:27:18.587 | 99.99th=[13042] 00:27:18.587 bw ( KiB/s): min=37216, max=39080, per=99.91%, avg=38334.00, stdev=832.75, samples=4 00:27:18.587 iops : min= 9304, max= 9770, avg=9583.50, stdev=208.19, samples=4 00:27:18.587 write: IOPS=9597, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec); 0 zone resets 00:27:18.587 slat (nsec): min=2090, max=89377, avg=2667.88, stdev=1434.44 00:27:18.587 clat (usec): min=958, max=10666, avg=5954.77, stdev=480.91 00:27:18.587 lat (usec): min=965, max=10669, avg=5957.44, stdev=480.86 00:27:18.587 clat percentiles (usec): 00:27:18.587 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:27:18.587 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:27:18.587 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:27:18.587 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 9110], 99.95th=[ 9634], 00:27:18.587 | 99.99th=[10552] 00:27:18.587 bw ( KiB/s): min=38032, max=38704, per=100.00%, avg=38400.00, stdev=287.70, samples=4 00:27:18.587 iops : min= 9508, max= 9676, avg=9600.00, stdev=71.93, samples=4 00:27:18.587 lat (usec) : 1000=0.01% 00:27:18.587 lat (msec) : 2=0.02%, 4=0.11%, 10=99.79%, 20=0.08% 00:27:18.587 cpu : usr=64.69%, sys=33.37%, ctx=54, majf=0, minf=36 00:27:18.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:18.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:18.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.587 issued rwts: total=19241,19253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.587 00:27:18.587 Run status group 0 (all jobs): 00:27:18.587 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.8MB), run=2006-2006msec 00:27:18.587 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.9MB), run=2006-2006msec 00:27:18.587 19:36:16 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:18.587 19:36:16 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:18.587 19:36:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:18.587 19:36:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.587 19:36:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:18.587 19:36:16 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.587 19:36:16 -- common/autotest_common.sh@1330 -- # shift 00:27:18.587 19:36:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:18.587 19:36:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.587 19:36:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.587 19:36:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.587 19:36:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.587 19:36:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.587 19:36:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:18.587 19:36:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:18.587 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:18.587 fio-3.35 00:27:18.587 Starting 1 thread 00:27:18.587 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.117 00:27:21.117 test: (groupid=0, jobs=1): err= 0: pid=1301885: Sun Nov 17 19:36:19 2024 00:27:21.117 read: IOPS=8951, BW=140MiB/s (147MB/s)(281MiB/2007msec) 00:27:21.117 slat (nsec): min=2909, max=99920, avg=3674.38, stdev=1811.40 
00:27:21.117 clat (usec): min=2347, max=15737, avg=8274.56, stdev=1969.49 00:27:21.117 lat (usec): min=2351, max=15740, avg=8278.23, stdev=1969.59 00:27:21.117 clat percentiles (usec): 00:27:21.117 | 1.00th=[ 4424], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6521], 00:27:21.117 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8717], 00:27:21.117 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:27:21.117 | 99.00th=[13435], 99.50th=[14091], 99.90th=[15008], 99.95th=[15401], 00:27:21.117 | 99.99th=[15664] 00:27:21.117 bw ( KiB/s): min=69856, max=76672, per=50.79%, avg=72736.00, stdev=3048.02, samples=4 00:27:21.117 iops : min= 4366, max= 4792, avg=4546.00, stdev=190.50, samples=4 00:27:21.117 write: IOPS=5373, BW=84.0MiB/s (88.0MB/s)(148MiB/1762msec); 0 zone resets 00:27:21.117 slat (usec): min=30, max=215, avg=33.38, stdev= 5.63 00:27:21.117 clat (usec): min=3412, max=18362, avg=10633.04, stdev=1810.51 00:27:21.117 lat (usec): min=3445, max=18393, avg=10666.41, stdev=1811.02 00:27:21.117 clat percentiles (usec): 00:27:21.117 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9110], 00:27:21.117 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[10945], 00:27:21.117 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13042], 95.00th=[13829], 00:27:21.117 | 99.00th=[15401], 99.50th=[16188], 99.90th=[17957], 99.95th=[18220], 00:27:21.117 | 99.99th=[18482] 00:27:21.117 bw ( KiB/s): min=72928, max=79488, per=88.10%, avg=75744.00, stdev=2860.14, samples=4 00:27:21.117 iops : min= 4558, max= 4968, avg=4734.00, stdev=178.76, samples=4 00:27:21.117 lat (msec) : 4=0.36%, 10=65.41%, 20=34.23% 00:27:21.117 cpu : usr=79.26%, sys=19.59%, ctx=31, majf=0, minf=68 00:27:21.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:21.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:21.118 issued rwts: total=17965,9468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:21.118 00:27:21.118 Run status group 0 (all jobs): 00:27:21.118 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (294MB), run=2007-2007msec 00:27:21.118 WRITE: bw=84.0MiB/s (88.0MB/s), 84.0MiB/s-84.0MiB/s (88.0MB/s-88.0MB/s), io=148MiB (155MB), run=1762-1762msec 00:27:21.118 19:36:19 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.375 19:36:19 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:21.375 19:36:19 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:21.375 19:36:19 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:21.375 19:36:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:27:21.375 19:36:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:27:21.375 19:36:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:21.375 19:36:19 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:21.375 19:36:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:27:21.375 19:36:19 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:27:21.375 19:36:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:88:00.0 00:27:21.375 19:36:19 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:24.654 Nvme0n1 00:27:24.654 19:36:22 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:27.934 19:36:25 -- host/fio.sh@53 -- # ls_guid=3922965c-67b2-444d-92d1-09249922c1db 00:27:27.934 19:36:25 -- host/fio.sh@54 -- # get_lvs_free_mb 3922965c-67b2-444d-92d1-09249922c1db 00:27:27.934 19:36:25 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3922965c-67b2-444d-92d1-09249922c1db 00:27:27.934 19:36:25 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:27.934 19:36:25 -- common/autotest_common.sh@1355 -- # local fc 00:27:27.934 19:36:25 -- common/autotest_common.sh@1356 -- # local cs 00:27:27.934 19:36:25 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:27.934 19:36:25 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:27.934 { 00:27:27.934 "uuid": "3922965c-67b2-444d-92d1-09249922c1db", 00:27:27.934 "name": "lvs_0", 00:27:27.934 "base_bdev": "Nvme0n1", 00:27:27.934 "total_data_clusters": 930, 00:27:27.934 "free_clusters": 930, 00:27:27.934 "block_size": 512, 00:27:27.934 "cluster_size": 1073741824 00:27:27.934 } 00:27:27.934 ]' 00:27:27.935 19:36:25 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3922965c-67b2-444d-92d1-09249922c1db") .free_clusters' 00:27:27.935 19:36:25 -- common/autotest_common.sh@1358 -- # fc=930 00:27:27.935 19:36:25 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3922965c-67b2-444d-92d1-09249922c1db") .cluster_size' 00:27:27.935 19:36:25 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:27.935 19:36:25 -- common/autotest_common.sh@1362 -- # free_mb=952320 00:27:27.935 19:36:25 -- common/autotest_common.sh@1363 -- # echo 952320 00:27:27.935 952320 00:27:27.935 19:36:25 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:28.237 f9fd4311-e24e-4318-a918-8e50bfc2d9ec 00:27:28.237 19:36:26 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:28.514 19:36:26 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:28.514 19:36:26 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:28.773 19:36:26 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:28.773 19:36:26 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:28.773 19:36:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:28.773 19:36:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:28.773 19:36:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:28.773 19:36:26 -- common/autotest_common.sh@1329 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:28.773 19:36:26 -- common/autotest_common.sh@1330 -- # shift 00:27:28.773 19:36:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:28.773 19:36:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:28.773 19:36:26 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:28.773 19:36:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:28.773 19:36:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:28.773 19:36:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:28.773 19:36:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:28.773 19:36:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:28.773 19:36:27 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:28.773 19:36:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:28.773 19:36:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:28.773 19:36:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:28.773 19:36:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:28.773 19:36:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:28.773 19:36:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:29.031 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:29.031 fio-3.35 00:27:29.031 Starting 1 thread 00:27:29.031 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.558 00:27:31.559 test: (groupid=0, jobs=1): err= 0: pid=1303204: Sun Nov 17 19:36:29 2024 00:27:31.559 read: IOPS=6365, BW=24.9MiB/s (26.1MB/s)(49.9MiB/2008msec) 00:27:31.559 slat (nsec): min=1884, max=164963, avg=2404.62, stdev=2052.55 00:27:31.559 clat (usec): min=750, max=170832, avg=10972.15, stdev=11348.38 00:27:31.559 lat (usec): min=753, max=170864, avg=10974.55, stdev=11348.64 00:27:31.559 clat percentiles (msec): 00:27:31.559 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:27:31.559 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:27:31.559 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:27:31.559 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:31.559 | 99.99th=[ 171] 00:27:31.559 bw ( KiB/s): min=17712, max=28104, per=99.85%, avg=25422.00, stdev=5141.11, samples=4 00:27:31.559 iops : min= 4428, max= 7026, avg=6355.50, stdev=1285.28, samples=4 00:27:31.559 write: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(49.9MiB/2008msec); 0 zone resets 00:27:31.559 slat (nsec): min=1963, max=104330, avg=2477.06, stdev=1412.89 00:27:31.559 clat (usec): min=281, max=169099, avg=8964.46, stdev=10629.51 00:27:31.559 lat (usec): min=284, max=169104, avg=8966.93, stdev=10629.71 00:27:31.559 clat percentiles (msec): 00:27:31.559 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:27:31.559 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:27:31.559 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:27:31.559 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:27:31.559 | 99.99th=[ 169] 00:27:31.559 bw ( KiB/s): min=18728, 
max=27872, per=99.94%, avg=25450.00, stdev=4484.71, samples=4 00:27:31.559 iops : min= 4682, max= 6968, avg=6362.50, stdev=1121.18, samples=4 00:27:31.559 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:31.559 lat (msec) : 2=0.02%, 4=0.16%, 10=70.25%, 20=29.04%, 250=0.50% 00:27:31.559 cpu : usr=63.13%, sys=35.48%, ctx=86, majf=0, minf=36 00:27:31.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:31.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:31.559 issued rwts: total=12781,12783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:31.559 00:27:31.559 Run status group 0 (all jobs): 00:27:31.559 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.3MB), run=2008-2008msec 00:27:31.559 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.4MB), run=2008-2008msec 00:27:31.559 19:36:29 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:31.559 19:36:29 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:32.931 19:36:30 -- host/fio.sh@64 -- # ls_nested_guid=b1bbf73a-1c8b-4b96-8e4f-7de66771e607 00:27:32.931 19:36:30 -- host/fio.sh@65 -- # get_lvs_free_mb b1bbf73a-1c8b-4b96-8e4f-7de66771e607 00:27:32.931 19:36:30 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b1bbf73a-1c8b-4b96-8e4f-7de66771e607 00:27:32.931 19:36:30 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:32.931 19:36:30 -- common/autotest_common.sh@1355 -- # local fc 00:27:32.931 19:36:30 -- common/autotest_common.sh@1356 -- # local cs 00:27:32.931 19:36:30 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:33.189 19:36:31 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:33.189 { 00:27:33.189 "uuid": "3922965c-67b2-444d-92d1-09249922c1db", 00:27:33.189 "name": "lvs_0", 00:27:33.189 "base_bdev": "Nvme0n1", 00:27:33.189 "total_data_clusters": 930, 00:27:33.189 "free_clusters": 0, 00:27:33.189 "block_size": 512, 00:27:33.189 "cluster_size": 1073741824 00:27:33.189 }, 00:27:33.189 { 00:27:33.189 "uuid": "b1bbf73a-1c8b-4b96-8e4f-7de66771e607", 00:27:33.189 "name": "lvs_n_0", 00:27:33.189 "base_bdev": "f9fd4311-e24e-4318-a918-8e50bfc2d9ec", 00:27:33.189 "total_data_clusters": 237847, 00:27:33.189 "free_clusters": 237847, 00:27:33.189 "block_size": 512, 00:27:33.190 "cluster_size": 4194304 00:27:33.190 } 00:27:33.190 ]' 00:27:33.190 19:36:31 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b1bbf73a-1c8b-4b96-8e4f-7de66771e607") .free_clusters' 00:27:33.190 19:36:31 -- common/autotest_common.sh@1358 -- # fc=237847 00:27:33.190 19:36:31 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b1bbf73a-1c8b-4b96-8e4f-7de66771e607") .cluster_size' 00:27:33.190 19:36:31 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:33.190 19:36:31 -- common/autotest_common.sh@1362 -- # free_mb=951388 00:27:33.190 19:36:31 -- common/autotest_common.sh@1363 -- # echo 951388 00:27:33.190 951388 00:27:33.190 19:36:31 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 
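The free-space math above is get_lvs_free_mb: it reads free_clusters and cluster_size for the store from bdev_lvol_get_lvstores and converts the product to MiB, which is where 952320 (930 clusters x 1 GiB) and 951388 (237847 clusters x 4 MiB) come from, and the result is handed straight to bdev_lvol_create as the lvol size. A minimal sketch, assuming the same jq filter the trace shows; the free_mb_of_lvstore wrapper name is hypothetical.

    free_mb_of_lvstore() {
        local uuid=$1 fc cs
        fc=$(./scripts/rpc.py bdev_lvol_get_lvstores | jq -r ".[] | select(.uuid==\"$uuid\") .free_clusters")
        cs=$(./scripts/rpc.py bdev_lvol_get_lvstores | jq -r ".[] | select(.uuid==\"$uuid\") .cluster_size")
        echo $(( fc * cs / 1024 / 1024 ))    # 237847 * 4194304 bytes -> 951388 MiB for lvs_n_0
    }

    ./scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 "$(free_mb_of_lvstore b1bbf73a-1c8b-4b96-8e4f-7de66771e607)"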
00:27:33.755 57140215-6349-47d7-bd06-32734a0e59b7 00:27:33.755 19:36:31 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:34.013 19:36:32 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:34.271 19:36:32 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:34.529 19:36:32 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:34.529 19:36:32 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:34.529 19:36:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:34.529 19:36:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:34.529 19:36:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:34.529 19:36:32 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:34.529 19:36:32 -- common/autotest_common.sh@1330 -- # shift 00:27:34.529 19:36:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:34.529 19:36:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:34.529 19:36:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:34.529 19:36:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:34.529 19:36:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:34.529 19:36:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:34.529 19:36:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:34.529 19:36:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:34.786 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:34.786 fio-3.35 00:27:34.786 Starting 1 thread 00:27:34.786 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.312 00:27:37.312 test: (groupid=0, jobs=1): err= 0: pid=1303962: Sun Nov 17 19:36:35 2024 00:27:37.312 read: IOPS=6235, BW=24.4MiB/s (25.5MB/s)(48.9MiB/2008msec) 00:27:37.312 slat 
(nsec): min=1816, max=155085, avg=2345.66, stdev=2068.74 00:27:37.312 clat (usec): min=4324, max=18159, avg=11329.95, stdev=966.33 00:27:37.312 lat (usec): min=4336, max=18162, avg=11332.29, stdev=966.23 00:27:37.312 clat percentiles (usec): 00:27:37.312 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:27:37.312 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:27:37.312 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:27:37.312 | 99.00th=[13566], 99.50th=[13829], 99.90th=[16450], 99.95th=[16909], 00:27:37.312 | 99.99th=[17957] 00:27:37.312 bw ( KiB/s): min=23744, max=25344, per=99.81%, avg=24896.00, stdev=770.33, samples=4 00:27:37.312 iops : min= 5936, max= 6336, avg=6224.00, stdev=192.58, samples=4 00:27:37.312 write: IOPS=6227, BW=24.3MiB/s (25.5MB/s)(48.8MiB/2008msec); 0 zone resets 00:27:37.312 slat (nsec): min=1938, max=124456, avg=2488.12, stdev=1687.47 00:27:37.312 clat (usec): min=2135, max=16786, avg=9104.43, stdev=843.83 00:27:37.312 lat (usec): min=2142, max=16788, avg=9106.92, stdev=843.78 00:27:37.312 clat percentiles (usec): 00:27:37.312 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8455], 00:27:37.312 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:27:37.312 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:27:37.312 | 99.00th=[10945], 99.50th=[11207], 99.90th=[15008], 99.95th=[16057], 00:27:37.312 | 99.99th=[16712] 00:27:37.312 bw ( KiB/s): min=24768, max=25152, per=99.97%, avg=24902.00, stdev=175.68, samples=4 00:27:37.312 iops : min= 6192, max= 6288, avg=6225.50, stdev=43.92, samples=4 00:27:37.312 lat (msec) : 4=0.04%, 10=47.31%, 20=52.64% 00:27:37.312 cpu : usr=59.99%, sys=38.61%, ctx=110, majf=0, minf=36 00:27:37.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:37.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:37.312 issued rwts: total=12521,12505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:37.312 00:27:37.312 Run status group 0 (all jobs): 00:27:37.312 READ: bw=24.4MiB/s (25.5MB/s), 24.4MiB/s-24.4MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2008-2008msec 00:27:37.312 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.8MiB (51.2MB), run=2008-2008msec 00:27:37.312 19:36:35 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:37.570 19:36:35 -- host/fio.sh@74 -- # sync 00:27:37.570 19:36:35 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:41.750 19:36:39 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:41.750 19:36:39 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:45.027 19:36:42 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:45.027 19:36:42 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:46.927 19:36:44 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:46.927 19:36:44 -- host/fio.sh@85 -- # 
rm -f ./local-test-0-verify.state 00:27:46.927 19:36:44 -- host/fio.sh@86 -- # nvmftestfini 00:27:46.927 19:36:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:46.927 19:36:44 -- nvmf/common.sh@116 -- # sync 00:27:46.927 19:36:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:46.927 19:36:44 -- nvmf/common.sh@119 -- # set +e 00:27:46.927 19:36:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:46.927 19:36:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:46.927 rmmod nvme_tcp 00:27:46.927 rmmod nvme_fabrics 00:27:46.927 rmmod nvme_keyring 00:27:46.927 19:36:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:46.927 19:36:44 -- nvmf/common.sh@123 -- # set -e 00:27:46.927 19:36:44 -- nvmf/common.sh@124 -- # return 0 00:27:46.927 19:36:44 -- nvmf/common.sh@477 -- # '[' -n 1300998 ']' 00:27:46.927 19:36:44 -- nvmf/common.sh@478 -- # killprocess 1300998 00:27:46.927 19:36:44 -- common/autotest_common.sh@936 -- # '[' -z 1300998 ']' 00:27:46.927 19:36:44 -- common/autotest_common.sh@940 -- # kill -0 1300998 00:27:46.927 19:36:44 -- common/autotest_common.sh@941 -- # uname 00:27:46.927 19:36:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:46.927 19:36:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1300998 00:27:46.927 19:36:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:46.927 19:36:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:46.927 19:36:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1300998' 00:27:46.927 killing process with pid 1300998 00:27:46.927 19:36:44 -- common/autotest_common.sh@955 -- # kill 1300998 00:27:46.927 19:36:44 -- common/autotest_common.sh@960 -- # wait 1300998 00:27:46.927 19:36:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:46.927 19:36:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:46.927 19:36:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:46.927 19:36:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.927 19:36:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:46.927 19:36:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.927 19:36:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.927 19:36:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.828 19:36:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:48.828 00:27:48.828 real 0m37.949s 00:27:48.828 user 2m26.490s 00:27:48.828 sys 0m6.626s 00:27:48.828 19:36:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:48.828 19:36:47 -- common/autotest_common.sh@10 -- # set +x 00:27:48.828 ************************************ 00:27:48.828 END TEST nvmf_fio_host 00:27:48.828 ************************************ 00:27:48.828 19:36:47 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:48.828 19:36:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:48.828 19:36:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:48.828 19:36:47 -- common/autotest_common.sh@10 -- # set +x 00:27:49.086 ************************************ 00:27:49.086 START TEST nvmf_failover 00:27:49.086 ************************************ 00:27:49.086 19:36:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:49.086 * Looking for test storage... 
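Before the failover test's own setup is traced, note how the fio host test above shut itself down in nvmftestfini: the NVMe-oF kernel modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt started earlier is killed and reaped, the target namespace is removed, and the initiator address is flushed. A condensed sketch with the names from this run; the ip netns delete line is an assumed equivalent, since the trace only shows the _remove_spdk_ns wrapper with its output silenced.

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # killprocess: pid 1300998 in this run
    ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1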
00:27:49.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.086 19:36:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:49.086 19:36:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:49.086 19:36:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:49.086 19:36:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:49.086 19:36:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:49.086 19:36:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:49.086 19:36:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:49.086 19:36:47 -- scripts/common.sh@335 -- # IFS=.-: 00:27:49.086 19:36:47 -- scripts/common.sh@335 -- # read -ra ver1 00:27:49.086 19:36:47 -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.086 19:36:47 -- scripts/common.sh@336 -- # read -ra ver2 00:27:49.086 19:36:47 -- scripts/common.sh@337 -- # local 'op=<' 00:27:49.086 19:36:47 -- scripts/common.sh@339 -- # ver1_l=2 00:27:49.086 19:36:47 -- scripts/common.sh@340 -- # ver2_l=1 00:27:49.086 19:36:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:49.086 19:36:47 -- scripts/common.sh@343 -- # case "$op" in 00:27:49.086 19:36:47 -- scripts/common.sh@344 -- # : 1 00:27:49.086 19:36:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:49.086 19:36:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:49.086 19:36:47 -- scripts/common.sh@364 -- # decimal 1 00:27:49.086 19:36:47 -- scripts/common.sh@352 -- # local d=1 00:27:49.086 19:36:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.086 19:36:47 -- scripts/common.sh@354 -- # echo 1 00:27:49.086 19:36:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:49.086 19:36:47 -- scripts/common.sh@365 -- # decimal 2 00:27:49.086 19:36:47 -- scripts/common.sh@352 -- # local d=2 00:27:49.086 19:36:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.086 19:36:47 -- scripts/common.sh@354 -- # echo 2 00:27:49.086 19:36:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:49.086 19:36:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:49.086 19:36:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:49.086 19:36:47 -- scripts/common.sh@367 -- # return 0 00:27:49.086 19:36:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.086 19:36:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:49.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.086 --rc genhtml_branch_coverage=1 00:27:49.086 --rc genhtml_function_coverage=1 00:27:49.086 --rc genhtml_legend=1 00:27:49.086 --rc geninfo_all_blocks=1 00:27:49.086 --rc geninfo_unexecuted_blocks=1 00:27:49.086 00:27:49.086 ' 00:27:49.086 19:36:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:49.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.086 --rc genhtml_branch_coverage=1 00:27:49.086 --rc genhtml_function_coverage=1 00:27:49.086 --rc genhtml_legend=1 00:27:49.087 --rc geninfo_all_blocks=1 00:27:49.087 --rc geninfo_unexecuted_blocks=1 00:27:49.087 00:27:49.087 ' 00:27:49.087 19:36:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:49.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.087 --rc genhtml_branch_coverage=1 00:27:49.087 --rc genhtml_function_coverage=1 00:27:49.087 --rc genhtml_legend=1 00:27:49.087 --rc geninfo_all_blocks=1 00:27:49.087 --rc geninfo_unexecuted_blocks=1 00:27:49.087 00:27:49.087 ' 
00:27:49.087 19:36:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:49.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.087 --rc genhtml_branch_coverage=1 00:27:49.087 --rc genhtml_function_coverage=1 00:27:49.087 --rc genhtml_legend=1 00:27:49.087 --rc geninfo_all_blocks=1 00:27:49.087 --rc geninfo_unexecuted_blocks=1 00:27:49.087 00:27:49.087 ' 00:27:49.087 19:36:47 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.087 19:36:47 -- nvmf/common.sh@7 -- # uname -s 00:27:49.087 19:36:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.087 19:36:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.087 19:36:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.087 19:36:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.087 19:36:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.087 19:36:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.087 19:36:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.087 19:36:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.087 19:36:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.087 19:36:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.087 19:36:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.087 19:36:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.087 19:36:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.087 19:36:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.087 19:36:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.087 19:36:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.087 19:36:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.087 19:36:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.087 19:36:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.087 19:36:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.087 19:36:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.087 19:36:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.087 19:36:47 -- paths/export.sh@5 -- # export PATH 00:27:49.087 19:36:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.087 19:36:47 -- nvmf/common.sh@46 -- # : 0 00:27:49.087 19:36:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:49.087 19:36:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:49.087 19:36:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:49.087 19:36:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.087 19:36:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.087 19:36:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:49.087 19:36:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:49.087 19:36:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:49.087 19:36:47 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:49.087 19:36:47 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:49.087 19:36:47 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:49.087 19:36:47 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:49.087 19:36:47 -- host/failover.sh@18 -- # nvmftestinit 00:27:49.087 19:36:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:49.087 19:36:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.087 19:36:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:49.087 19:36:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:49.087 19:36:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:49.087 19:36:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.087 19:36:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.087 19:36:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.087 19:36:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:49.087 19:36:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:49.087 19:36:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:49.087 19:36:47 -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 19:36:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:51.616 19:36:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:51.616 19:36:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:51.616 19:36:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:51.616 19:36:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:51.616 19:36:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:51.616 19:36:49 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:27:51.616 19:36:49 -- nvmf/common.sh@294 -- # net_devs=() 00:27:51.616 19:36:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:51.616 19:36:49 -- nvmf/common.sh@295 -- # e810=() 00:27:51.616 19:36:49 -- nvmf/common.sh@295 -- # local -ga e810 00:27:51.616 19:36:49 -- nvmf/common.sh@296 -- # x722=() 00:27:51.616 19:36:49 -- nvmf/common.sh@296 -- # local -ga x722 00:27:51.616 19:36:49 -- nvmf/common.sh@297 -- # mlx=() 00:27:51.616 19:36:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:51.616 19:36:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.616 19:36:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:51.616 19:36:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:51.616 19:36:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:51.616 19:36:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:51.616 19:36:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.616 19:36:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:51.616 19:36:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.616 19:36:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:51.616 19:36:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:51.616 19:36:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.616 19:36:49 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:27:51.616 19:36:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.616 19:36:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:51.616 19:36:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.616 19:36:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:51.616 19:36:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.616 19:36:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:51.616 19:36:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.616 19:36:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.616 19:36:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.616 19:36:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:51.616 19:36:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:51.616 19:36:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:51.616 19:36:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.616 19:36:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.616 19:36:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.616 19:36:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:51.616 19:36:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.616 19:36:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.616 19:36:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:51.616 19:36:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.616 19:36:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.616 19:36:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:51.616 19:36:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:51.616 19:36:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.616 19:36:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.616 19:36:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.616 19:36:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.616 19:36:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:51.616 19:36:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.616 19:36:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.616 19:36:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.616 19:36:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:51.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:27:51.616 00:27:51.616 --- 10.0.0.2 ping statistics --- 00:27:51.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.616 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:27:51.616 19:36:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:27:51.616 00:27:51.616 --- 10.0.0.1 ping statistics --- 00:27:51.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.616 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:51.616 19:36:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.616 19:36:49 -- nvmf/common.sh@410 -- # return 0 00:27:51.616 19:36:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:51.616 19:36:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.616 19:36:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:51.616 19:36:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.616 19:36:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:51.616 19:36:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:51.616 19:36:49 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:51.616 19:36:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:51.616 19:36:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:51.616 19:36:49 -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 19:36:49 -- nvmf/common.sh@469 -- # nvmfpid=1307274 00:27:51.616 19:36:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:51.616 19:36:49 -- nvmf/common.sh@470 -- # waitforlisten 1307274 00:27:51.616 19:36:49 -- common/autotest_common.sh@829 -- # '[' -z 1307274 ']' 00:27:51.616 19:36:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.616 19:36:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.616 19:36:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.616 19:36:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.616 19:36:49 -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 [2024-11-17 19:36:49.468044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:51.617 [2024-11-17 19:36:49.468117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.617 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.617 [2024-11-17 19:36:49.534264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:51.617 [2024-11-17 19:36:49.620880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:51.617 [2024-11-17 19:36:49.621027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.617 [2024-11-17 19:36:49.621043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.617 [2024-11-17 19:36:49.621055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
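Condensed from the nvmftestinit trace above, the test-bed plumbing for this run amounts to the bash sketch below; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the ones logged in this job and will differ on other hardware.

    # Move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check
    modprobe nvme-tcp
    # Start the SPDK target inside the namespace, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &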
00:27:51.617 [2024-11-17 19:36:49.621160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.617 [2024-11-17 19:36:49.621185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.617 [2024-11-17 19:36:49.621187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.549 19:36:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.549 19:36:50 -- common/autotest_common.sh@862 -- # return 0 00:27:52.549 19:36:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:52.549 19:36:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.549 19:36:50 -- common/autotest_common.sh@10 -- # set +x 00:27:52.549 19:36:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.549 19:36:50 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:52.549 [2024-11-17 19:36:50.734357] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.549 19:36:50 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:52.808 Malloc0 00:27:52.808 19:36:51 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.066 19:36:51 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.323 19:36:51 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.581 [2024-11-17 19:36:51.761697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.581 19:36:51 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:53.839 [2024-11-17 19:36:52.022408] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:53.839 19:36:52 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:54.096 [2024-11-17 19:36:52.271279] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:54.096 19:36:52 -- host/failover.sh@31 -- # bdevperf_pid=1307701 00:27:54.096 19:36:52 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:54.096 19:36:52 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:54.096 19:36:52 -- host/failover.sh@34 -- # waitforlisten 1307701 /var/tmp/bdevperf.sock 00:27:54.096 19:36:52 -- common/autotest_common.sh@829 -- # '[' -z 1307701 ']' 00:27:54.096 19:36:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.096 19:36:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.096 19:36:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:54.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:54.096 19:36:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.096 19:36:52 -- common/autotest_common.sh@10 -- # set +x 00:27:55.027 19:36:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.027 19:36:53 -- common/autotest_common.sh@862 -- # return 0 00:27:55.027 19:36:53 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.592 NVMe0n1 00:27:55.592 19:36:53 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.849 00:27:55.849 19:36:54 -- host/failover.sh@39 -- # run_test_pid=1307942 00:27:55.849 19:36:54 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.849 19:36:54 -- host/failover.sh@41 -- # sleep 1 00:27:57.221 19:36:55 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.221 [2024-11-17 19:36:55.355694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be 
set 00:27:57.221 [2024-11-17 19:36:55.355927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.355997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 
is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.221 [2024-11-17 19:36:55.356272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 [2024-11-17 19:36:55.356352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67d180 is same with the state(5) to be set 00:27:57.222 19:36:55 -- host/failover.sh@45 -- # sleep 3 00:28:00.503 19:36:58 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.761 00:28:00.761 19:36:58 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.019 [2024-11-17 19:36:59.126601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126694] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.019 [2024-11-17 19:36:59.126811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 [2024-11-17 19:36:59.126893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67e530 is same with the state(5) to be set 00:28:01.020 19:36:59 -- host/failover.sh@50 -- # sleep 3 00:28:04.360 19:37:02 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.360 [2024-11-17 19:37:02.437177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.360 19:37:02 -- host/failover.sh@55 -- # sleep 1 00:28:05.293 19:37:03 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:05.551 [2024-11-17 19:37:03.744377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 
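With the TCP transport, the Malloc0-backed subsystem nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420/4421/4422 created over rpc.py earlier in the trace, the failover exercise itself reduces to toggling listeners underneath the running bdevperf. A rough sketch of the sequence visible in this log, with the long workspace path shortened to $RPC and the bdevperf RPC socket to $BPERF_SOCK:

    RPC=./scripts/rpc.py
    BPERF_SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the first path
    sleep 3
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
         -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN                              # add a third path
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop the second path
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the original port
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422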
[2024-11-17 19:37:03.744440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.551 [2024-11-17 19:37:03.744604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the 
state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744963] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.744990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.745002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.745014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.745025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.745037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.745048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 [2024-11-17 19:37:03.745060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8345e0 is same with the state(5) to be set 00:28:05.552 19:37:03 -- host/failover.sh@59 -- # wait 1307942 00:28:12.115 0 00:28:12.115 19:37:09 -- host/failover.sh@61 -- # killprocess 1307701 00:28:12.115 19:37:09 -- common/autotest_common.sh@936 -- # '[' -z 1307701 ']' 00:28:12.115 19:37:09 -- common/autotest_common.sh@940 -- # kill -0 1307701 00:28:12.115 19:37:09 -- common/autotest_common.sh@941 -- # uname 00:28:12.115 19:37:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:12.115 19:37:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1307701 00:28:12.115 19:37:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:12.115 19:37:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:12.115 19:37:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1307701' 00:28:12.115 killing process with pid 1307701 00:28:12.115 19:37:09 -- common/autotest_common.sh@955 -- # kill 1307701 00:28:12.115 19:37:09 -- common/autotest_common.sh@960 -- # wait 1307701 00:28:12.115 19:37:09 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:12.115 [2024-11-17 19:36:52.327510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:12.115 [2024-11-17 19:36:52.327611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307701 ] 00:28:12.115 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.115 [2024-11-17 19:36:52.387994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.115 [2024-11-17 19:36:52.471670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.115 Running I/O for 15 seconds... 
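The try.txt dump that follows is the initiator-side record for this run. As a sketch of how that side is driven (paths shortened; socket, queue depth, and NQN taken from the trace above), bdevperf is started with -z against its own RPC socket, the controller is attached through two of the target's listeners, and the 15-second verify workload is then kicked off via perform_tests; the long run of ABORTED - SQ DELETION completions below appears to be the expected fallout of each listener removal tearing down the active qpair while I/O is still queued.

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &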
00:28:12.115 [2024-11-17 19:36:55.356630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.356970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.356984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 
19:36:55.356999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.357013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.357051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.357066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.357081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.357094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.357124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.357136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.115 [2024-11-17 19:36:55.357150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.115 [2024-11-17 19:36:55.357163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357587] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.116 [2024-11-17 19:36:55.357844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.116 [2024-11-17 19:36:55.357899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.357969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.357993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.116 [2024-11-17 19:36:55.358199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118312 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:12.116 [2024-11-17 19:36:55.358227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.116 [2024-11-17 19:36:55.358282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.116 [2024-11-17 19:36:55.358296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.116 [2024-11-17 19:36:55.358310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:12.117 [2024-11-17 19:36:55.358502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.358778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 
19:36:55.358835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.358954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.358969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359439] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.117 [2024-11-17 19:36:55.359467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.117 [2024-11-17 19:36:55.359494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.117 [2024-11-17 19:36:55.359509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.118 [2024-11-17 19:36:55.359638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.118 [2024-11-17 19:36:55.359671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.359950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.359970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.118 [2024-11-17 19:36:55.360214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.118 [2024-11-17 19:36:55.360248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.118 [2024-11-17 19:36:55.360330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 [2024-11-17 19:36:55.360344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.118 [2024-11-17 19:36:55.360357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.118 
[2024-11-17 19:36:55.360372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.118 [2024-11-17 19:36:55.360384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.118 [2024-11-17 19:36:55.360412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.118 [2024-11-17 19:36:55.360439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.118 [2024-11-17 19:36:55.360471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.118 [2024-11-17 19:36:55.360499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.118 [2024-11-17 19:36:55.360527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194cc70 is same with the state(5) to be set
00:28:12.118 [2024-11-17 19:36:55.360556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:12.118 [2024-11-17 19:36:55.360566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:12.118 [2024-11-17 19:36:55.360582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118264 len:8 PRP1 0x0 PRP2 0x0
00:28:12.118 [2024-11-17 19:36:55.360595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360651] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194cc70 was disconnected and freed. reset controller.
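The completions above all carry the status pair printed as "(00/08)": status code type 0x0 (generic command status) with status code 0x08, i.e. the command was aborted because its submission queue was deleted while the TCP qpair was being torn down. A minimal decoding sketch in plain Python (no SPDK bindings; the table is an illustrative subset of the generic status codes seen in this log, not the full NVMe status table):

# Decode the "(sct/sc)" pair that spdk_nvme_print_completion emits, e.g. "(00/08)".
GENERIC_STATUS = {          # status code type 0x0: generic command status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair: str) -> str:
    """pair is the hex field from the log, e.g. '00/08'."""
    sct, sc = (int(x, 16) for x in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

print(decode_status("00/08"))   # -> ABORTED - SQ DELETION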
00:28:12.118 [2024-11-17 19:36:55.360707] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:12.118 [2024-11-17 19:36:55.360745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.118 [2024-11-17 19:36:55.360764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.118 [2024-11-17 19:36:55.360794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.118 [2024-11-17 19:36:55.360807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.118 [2024-11-17 19:36:55.360820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.119 [2024-11-17 19:36:55.360834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:12.119 [2024-11-17 19:36:55.360847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.119 [2024-11-17 19:36:55.360859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:12.119 [2024-11-17 19:36:55.360912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d600 (9): Bad file descriptor
00:28:12.119 [2024-11-17 19:36:55.363224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:12.119 [2024-11-17 19:36:55.516259] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
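The bdev_nvme notices above mark the path switch: the controller for nqn.2016-06.io.spdk:cnode1 is failed over from 10.0.0.2:4420 to 10.0.0.2:4421 and then reset on the new path, and the entries that follow (19:36:59) show the same SQ-deletion abort pattern repeating against tqpair 0x192d600. A small, hypothetical helper for summarizing these events from a captured log like this one (the regular expressions and function name are assumptions keyed to the message text above, not an SPDK interface):

import re
from collections import Counter

FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")
ABORT_RE    = re.compile(r"ABORTED - SQ DELETION")

def summarize_failover(log_text: str) -> dict:
    """Count SQ-deletion aborts and collect failover path transitions."""
    counts = Counter()
    paths = []
    for line in log_text.splitlines():
        if ABORT_RE.search(line):
            counts["aborted_sq_deletion"] += 1
        m = FAILOVER_RE.search(line)
        if m:
            paths.append((m.group(1), m.group(2)))
        if RESET_OK_RE.search(line):
            counts["successful_resets"] += 1
    return {"counts": dict(counts), "failover_paths": paths}

# e.g. summarize_failover(open("nvmf-tcp-phy-autotest.log").read())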
00:28:12.119 [2024-11-17 19:36:59.126416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.119 [2024-11-17 19:36:59.126494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.126529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.119 [2024-11-17 19:36:59.126544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.126557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.119 [2024-11-17 19:36:59.126570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.126584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.119 [2024-11-17 19:36:59.126597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.126609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192d600 is same with the state(5) to be set 00:28:12.119 [2024-11-17 19:36:59.127035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.119 [2024-11-17 19:36:59.127745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.119 [2024-11-17 19:36:59.127773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.119 [2024-11-17 19:36:59.127830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.119 [2024-11-17 19:36:59.127858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.119 [2024-11-17 19:36:59.127873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.127886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.127901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.127914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.127928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.127942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.127974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.127988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 
[2024-11-17 19:36:59.128158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.120 [2024-11-17 19:36:59.128846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.120 [2024-11-17 19:36:59.128927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.120 [2024-11-17 19:36:59.128942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.128970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.128994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43008 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:12.121 [2024-11-17 19:36:59.129305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129573] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.129933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.129975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.129988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.130018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.130031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.130045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.121 [2024-11-17 19:36:59.130057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.121 [2024-11-17 19:36:59.130072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.121 [2024-11-17 19:36:59.130084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.122 [2024-11-17 19:36:59.130110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.122 [2024-11-17 19:36:59.130163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.122 [2024-11-17 19:36:59.130215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.122 [2024-11-17 19:36:59.130241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.122 [2024-11-17 19:36:59.130303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:12.122 [2024-11-17 19:36:59.130472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.122 [2024-11-17 19:36:59.130485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:36:59.130761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130792] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.122 [2024-11-17 19:36:59.130808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.122 [2024-11-17 19:36:59.130821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42672 len:8 PRP1 0x0 PRP2 0x0 00:28:12.122 [2024-11-17 19:36:59.130835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:36:59.130891] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1939a90 was disconnected and freed. reset controller. 00:28:12.122 [2024-11-17 19:36:59.130910] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:12.122 [2024-11-17 19:36:59.130925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:12.122 [2024-11-17 19:36:59.133077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:12.122 [2024-11-17 19:36:59.133115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d600 (9): Bad file descriptor 00:28:12.122 [2024-11-17 19:36:59.203443] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:12.122 [2024-11-17 19:37:03.745163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 
19:37:03.745405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.122 [2024-11-17 19:37:03.745567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.122 [2024-11-17 19:37:03.745581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.745594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.745730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.745770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.745907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.745935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.745963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.745978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:89 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25464 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 
[2024-11-17 19:37:03.746587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.123 [2024-11-17 19:37:03.746735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.123 [2024-11-17 19:37:03.746763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.123 [2024-11-17 19:37:03.746778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.746791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.746818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.746849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.746878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.746916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.746944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.746971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.746986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.746999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.124 [2024-11-17 19:37:03.747480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.124 [2024-11-17 19:37:03.747792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.124 [2024-11-17 19:37:03.747805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 
19:37:03.747820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.747835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.747851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.747865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.747880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.747893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.747908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.747922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.747937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.747951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.747965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.747998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748411] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25920 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.125 [2024-11-17 19:37:03.748760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.748952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.125 [2024-11-17 19:37:03.748971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.125 [2024-11-17 19:37:03.749000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.126 [2024-11-17 19:37:03.749014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194ec50 is same with the state(5) to be set 
00:28:12.126 [2024-11-17 19:37:03.749029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.126 [2024-11-17 19:37:03.749040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.126 [2024-11-17 19:37:03.749056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25272 len:8 PRP1 0x0 PRP2 0x0 00:28:12.126 [2024-11-17 19:37:03.749068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.126 [2024-11-17 19:37:03.749128] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194ec50 was disconnected and freed. reset controller. 00:28:12.126 [2024-11-17 19:37:03.749147] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:12.126 [2024-11-17 19:37:03.749194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.126 [2024-11-17 19:37:03.749218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.126 [2024-11-17 19:37:03.749233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.126 [2024-11-17 19:37:03.749260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.126 [2024-11-17 19:37:03.749275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.126 [2024-11-17 19:37:03.749287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.126 [2024-11-17 19:37:03.749302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.126 [2024-11-17 19:37:03.749314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.126 [2024-11-17 19:37:03.749327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:12.126 [2024-11-17 19:37:03.751322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:12.126 [2024-11-17 19:37:03.751362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d600 (9): Bad file descriptor 00:28:12.126 [2024-11-17 19:37:03.824109] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
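The abort storm above is the expected side effect of the failover path: when the active I/O qpair is torn down, every queued READ/WRITE is completed manually with ABORTED - SQ DELETION (00/08), the qpair (0x194ec50 here) is disconnected and freed, and bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420 before resetting the controller. A rough sketch of the RPC sequence the harness drives to provoke this, assuming the same target address, ports and NQN that appear in this log (not the verbatim failover.sh script):

  # publish the two extra listeners for the subsystem under test
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register all three paths with the bdevperf app listening on /var/tmp/bdevperf.sock
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the path currently carrying I/O; queued requests are aborted and bdev_nvme retries
  # them on another registered trid (the "Start failover ... reset controller" lines above)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1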
00:28:12.126
00:28:12.126 Latency(us)
00:28:12.126 [2024-11-17T18:37:10.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:12.126 [2024-11-17T18:37:10.393Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:12.126 Verification LBA range: start 0x0 length 0x4000
00:28:12.126 NVMe0n1 : 15.01 12289.35 48.01 1171.45 0.00 9491.81 606.81 14951.92
00:28:12.126 [2024-11-17T18:37:10.393Z] ===================================================================================================================
00:28:12.126 [2024-11-17T18:37:10.393Z] Total : 12289.35 48.01 1171.45 0.00 9491.81 606.81 14951.92
00:28:12.126 Received shutdown signal, test time was about 15.000000 seconds
00:28:12.126
00:28:12.126 Latency(us)
00:28:12.126 [2024-11-17T18:37:10.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:12.126 [2024-11-17T18:37:10.393Z] ===================================================================================================================
00:28:12.126 [2024-11-17T18:37:10.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:12.126 19:37:09 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:12.126 19:37:09 -- host/failover.sh@65 -- # count=3
00:28:12.126 19:37:09 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:12.126 19:37:09 -- host/failover.sh@73 -- # bdevperf_pid=1309747
00:28:12.126 19:37:09 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:12.126 19:37:09 -- host/failover.sh@75 -- # waitforlisten 1309747 /var/tmp/bdevperf.sock
00:28:12.126 19:37:09 -- common/autotest_common.sh@829 -- # '[' -z 1309747 ']'
00:28:12.126 19:37:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:12.126 19:37:09 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:12.126 19:37:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:12.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
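Worth noting in the block above: the grep -c / count=3 check verifies that three "Resetting controller successful" events were logged during the first run, and the second bdevperf instance is launched with -z, so it initializes but does not start its verify job until told to over the UNIX-domain RPC socket given by -r; waitforlisten just polls that socket before controllers are attached. A condensed sketch of that pattern, assuming the same socket path and job parameters shown here (the polling loop is illustrative, not the harness helper itself):

  # start bdevperf in RPC-driven mode; -q/-o/-w/-t mirror the job shown in this log
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # wait until the app answers on its RPC socket (any cheap RPC works as a liveness probe)
  until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # ... attach the NVMe0 paths as above, then kick off the workload:
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests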
00:28:12.126 19:37:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.126 19:37:09 -- common/autotest_common.sh@10 -- # set +x 00:28:12.384 19:37:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.384 19:37:10 -- common/autotest_common.sh@862 -- # return 0 00:28:12.384 19:37:10 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:12.641 [2024-11-17 19:37:10.784777] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:12.641 19:37:10 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:12.899 [2024-11-17 19:37:11.057565] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:12.899 19:37:11 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:13.464 NVMe0n1 00:28:13.464 19:37:11 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:13.721 00:28:13.721 19:37:11 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:13.979 00:28:13.979 19:37:12 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:13.979 19:37:12 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:14.236 19:37:12 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:14.495 19:37:12 -- host/failover.sh@87 -- # sleep 3 00:28:17.786 19:37:15 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:17.786 19:37:15 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:17.786 19:37:15 -- host/failover.sh@90 -- # run_test_pid=1310569 00:28:17.786 19:37:15 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:17.786 19:37:15 -- host/failover.sh@92 -- # wait 1310569 00:28:19.163 0 00:28:19.163 19:37:17 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:19.163 [2024-11-17 19:37:09.586330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:19.163 [2024-11-17 19:37:09.586449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309747 ] 00:28:19.163 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.163 [2024-11-17 19:37:09.645739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.163 [2024-11-17 19:37:09.726627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.163 [2024-11-17 19:37:12.658968] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:19.163 [2024-11-17 19:37:12.659075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.163 [2024-11-17 19:37:12.659098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.163 [2024-11-17 19:37:12.659115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.163 [2024-11-17 19:37:12.659142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.163 [2024-11-17 19:37:12.659156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.163 [2024-11-17 19:37:12.659169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.164 [2024-11-17 19:37:12.659184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.164 [2024-11-17 19:37:12.659197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.164 [2024-11-17 19:37:12.659212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.164 [2024-11-17 19:37:12.659251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.164 [2024-11-17 19:37:12.659282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1651600 (9): Bad file descriptor 00:28:19.164 [2024-11-17 19:37:12.709838] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:19.164 Running I/O for 1 seconds... 
00:28:19.164
00:28:19.164 Latency(us)
00:28:19.164 [2024-11-17T18:37:17.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.164 [2024-11-17T18:37:17.431Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:19.164 Verification LBA range: start 0x0 length 0x4000
00:28:19.164 NVMe0n1 : 1.01 12958.04 50.62 0.00 0.00 9833.34 1231.83 11213.94
00:28:19.164 [2024-11-17T18:37:17.431Z] ===================================================================================================================
00:28:19.164 [2024-11-17T18:37:17.431Z] Total : 12958.04 50.62 0.00 0.00 9833.34 1231.83 11213.94
00:28:19.164 19:37:17 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:19.164 19:37:17 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:19.164 19:37:17 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:19.421 19:37:17 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:19.421 19:37:17 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:19.679 19:37:17 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:19.938 19:37:18 -- host/failover.sh@101 -- # sleep 3
00:28:23.225 19:37:21 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:23.225 19:37:21 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:23.225 19:37:21 -- host/failover.sh@108 -- # killprocess 1309747
00:28:23.225 19:37:21 -- common/autotest_common.sh@936 -- # '[' -z 1309747 ']'
00:28:23.225 19:37:21 -- common/autotest_common.sh@940 -- # kill -0 1309747
00:28:23.225 19:37:21 -- common/autotest_common.sh@941 -- # uname
00:28:23.225 19:37:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:23.225 19:37:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1309747
00:28:23.225 19:37:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:23.225 19:37:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:23.225 19:37:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1309747'
00:28:23.225 killing process with pid 1309747
00:28:23.225 19:37:21 -- common/autotest_common.sh@955 -- # kill 1309747
00:28:23.225 19:37:21 -- common/autotest_common.sh@960 -- # wait 1309747
00:28:23.484 19:37:21 -- host/failover.sh@110 -- # sync
00:28:23.484 19:37:21 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:23.744 19:37:21 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:23.744 19:37:21 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:23.744 19:37:21 -- host/failover.sh@116 -- # nvmftestfini
00:28:23.744 19:37:21 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:23.744 19:37:21 -- nvmf/common.sh@116 -- # sync
00:28:23.744 19:37:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:28:23.744 19:37:21 -- nvmf/common.sh@119 -- # set +e
00:28:23.744 19:37:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:23.744 19:37:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:23.744 rmmod nvme_tcp 00:28:23.744 rmmod nvme_fabrics 00:28:23.744 rmmod nvme_keyring 00:28:23.744 19:37:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:23.744 19:37:21 -- nvmf/common.sh@123 -- # set -e 00:28:23.744 19:37:21 -- nvmf/common.sh@124 -- # return 0 00:28:23.744 19:37:21 -- nvmf/common.sh@477 -- # '[' -n 1307274 ']' 00:28:23.744 19:37:21 -- nvmf/common.sh@478 -- # killprocess 1307274 00:28:23.744 19:37:21 -- common/autotest_common.sh@936 -- # '[' -z 1307274 ']' 00:28:23.744 19:37:21 -- common/autotest_common.sh@940 -- # kill -0 1307274 00:28:23.744 19:37:21 -- common/autotest_common.sh@941 -- # uname 00:28:23.744 19:37:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:23.744 19:37:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1307274 00:28:23.744 19:37:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:23.744 19:37:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:23.744 19:37:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1307274' 00:28:23.744 killing process with pid 1307274 00:28:23.744 19:37:21 -- common/autotest_common.sh@955 -- # kill 1307274 00:28:23.744 19:37:21 -- common/autotest_common.sh@960 -- # wait 1307274 00:28:24.004 19:37:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:24.004 19:37:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:24.004 19:37:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:24.004 19:37:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.004 19:37:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:24.004 19:37:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.004 19:37:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.004 19:37:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.545 19:37:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:26.545 00:28:26.545 real 0m37.166s 00:28:26.545 user 2m11.126s 00:28:26.545 sys 0m6.180s 00:28:26.545 19:37:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:26.545 19:37:24 -- common/autotest_common.sh@10 -- # set +x 00:28:26.545 ************************************ 00:28:26.545 END TEST nvmf_failover 00:28:26.545 ************************************ 00:28:26.545 19:37:24 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:26.545 19:37:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:26.545 19:37:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:26.545 19:37:24 -- common/autotest_common.sh@10 -- # set +x 00:28:26.545 ************************************ 00:28:26.545 START TEST nvmf_discovery 00:28:26.545 ************************************ 00:28:26.545 19:37:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:26.545 * Looking for test storage... 
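The nvmf_discovery test that begins here reuses the same physical setup: the target runs inside the cvl_0_0_ns_spdk namespace with cvl_0_0 at 10.0.0.2 while the initiator side keeps cvl_0_1 at 10.0.0.1, and a second SPDK app (the "host", listening on /tmp/host.sock) attaches namespaces purely through the discovery service on port 8009. Stripped of the lcov/PATH/NIC-probing prelude that follows, the RPC flow exercised further down is roughly as sketched here, using the NQNs, ports and sockets from this log rather than the verbatim discovery.sh:

  # target side (default spdk.sock): TCP transport, discovery listener, and two null bdevs to export later
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  # host side: start a discovery poller against 8009 before any data subsystem exists
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: publish a subsystem; once its 4420 listener and the host NQN are in place,
  # the poller sees it in the discovery log page and attaches it as nvme0 / nvme0n1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test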
00:28:26.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.545 19:37:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:26.545 19:37:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:26.545 19:37:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:26.545 19:37:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:26.545 19:37:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:26.545 19:37:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:26.545 19:37:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:26.545 19:37:24 -- scripts/common.sh@335 -- # IFS=.-: 00:28:26.545 19:37:24 -- scripts/common.sh@335 -- # read -ra ver1 00:28:26.545 19:37:24 -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.545 19:37:24 -- scripts/common.sh@336 -- # read -ra ver2 00:28:26.545 19:37:24 -- scripts/common.sh@337 -- # local 'op=<' 00:28:26.545 19:37:24 -- scripts/common.sh@339 -- # ver1_l=2 00:28:26.545 19:37:24 -- scripts/common.sh@340 -- # ver2_l=1 00:28:26.545 19:37:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:26.545 19:37:24 -- scripts/common.sh@343 -- # case "$op" in 00:28:26.545 19:37:24 -- scripts/common.sh@344 -- # : 1 00:28:26.545 19:37:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:26.545 19:37:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:26.545 19:37:24 -- scripts/common.sh@364 -- # decimal 1 00:28:26.545 19:37:24 -- scripts/common.sh@352 -- # local d=1 00:28:26.545 19:37:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.545 19:37:24 -- scripts/common.sh@354 -- # echo 1 00:28:26.545 19:37:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:26.545 19:37:24 -- scripts/common.sh@365 -- # decimal 2 00:28:26.545 19:37:24 -- scripts/common.sh@352 -- # local d=2 00:28:26.545 19:37:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.545 19:37:24 -- scripts/common.sh@354 -- # echo 2 00:28:26.545 19:37:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:26.545 19:37:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:26.545 19:37:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:26.545 19:37:24 -- scripts/common.sh@367 -- # return 0 00:28:26.545 19:37:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.545 19:37:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:26.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.545 --rc genhtml_branch_coverage=1 00:28:26.545 --rc genhtml_function_coverage=1 00:28:26.545 --rc genhtml_legend=1 00:28:26.545 --rc geninfo_all_blocks=1 00:28:26.545 --rc geninfo_unexecuted_blocks=1 00:28:26.545 00:28:26.545 ' 00:28:26.545 19:37:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:26.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.545 --rc genhtml_branch_coverage=1 00:28:26.545 --rc genhtml_function_coverage=1 00:28:26.545 --rc genhtml_legend=1 00:28:26.545 --rc geninfo_all_blocks=1 00:28:26.545 --rc geninfo_unexecuted_blocks=1 00:28:26.545 00:28:26.545 ' 00:28:26.545 19:37:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:26.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.545 --rc genhtml_branch_coverage=1 00:28:26.545 --rc genhtml_function_coverage=1 00:28:26.545 --rc genhtml_legend=1 00:28:26.545 --rc geninfo_all_blocks=1 00:28:26.545 --rc geninfo_unexecuted_blocks=1 00:28:26.545 00:28:26.545 ' 
00:28:26.545 19:37:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:26.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.545 --rc genhtml_branch_coverage=1 00:28:26.545 --rc genhtml_function_coverage=1 00:28:26.545 --rc genhtml_legend=1 00:28:26.545 --rc geninfo_all_blocks=1 00:28:26.545 --rc geninfo_unexecuted_blocks=1 00:28:26.545 00:28:26.545 ' 00:28:26.546 19:37:24 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.546 19:37:24 -- nvmf/common.sh@7 -- # uname -s 00:28:26.546 19:37:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.546 19:37:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.546 19:37:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.546 19:37:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.546 19:37:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.546 19:37:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.546 19:37:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.546 19:37:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.546 19:37:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.546 19:37:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.546 19:37:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.546 19:37:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.546 19:37:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.546 19:37:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.546 19:37:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.546 19:37:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.546 19:37:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.546 19:37:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.546 19:37:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.546 19:37:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.546 19:37:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.546 19:37:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.546 19:37:24 -- paths/export.sh@5 -- # export PATH 00:28:26.546 19:37:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.546 19:37:24 -- nvmf/common.sh@46 -- # : 0 00:28:26.546 19:37:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:26.546 19:37:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:26.546 19:37:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:26.546 19:37:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.546 19:37:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.546 19:37:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:26.546 19:37:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:26.546 19:37:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:26.546 19:37:24 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:26.546 19:37:24 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:26.546 19:37:24 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:26.546 19:37:24 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:26.546 19:37:24 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:26.546 19:37:24 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:26.546 19:37:24 -- host/discovery.sh@25 -- # nvmftestinit 00:28:26.546 19:37:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:26.546 19:37:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.546 19:37:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:26.546 19:37:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:26.546 19:37:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:26.546 19:37:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.546 19:37:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.546 19:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.546 19:37:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:26.546 19:37:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:26.546 19:37:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:26.546 19:37:24 -- common/autotest_common.sh@10 -- # set +x 00:28:28.452 19:37:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:28.452 19:37:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:28.452 19:37:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:28.452 19:37:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:28.452 19:37:26 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:28.452 19:37:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:28.452 19:37:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:28.452 19:37:26 -- nvmf/common.sh@294 -- # net_devs=() 00:28:28.452 19:37:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:28.452 19:37:26 -- nvmf/common.sh@295 -- # e810=() 00:28:28.452 19:37:26 -- nvmf/common.sh@295 -- # local -ga e810 00:28:28.452 19:37:26 -- nvmf/common.sh@296 -- # x722=() 00:28:28.452 19:37:26 -- nvmf/common.sh@296 -- # local -ga x722 00:28:28.452 19:37:26 -- nvmf/common.sh@297 -- # mlx=() 00:28:28.452 19:37:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:28.452 19:37:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.452 19:37:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:28.452 19:37:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:28.452 19:37:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:28.452 19:37:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:28.452 19:37:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:28.452 19:37:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:28.452 19:37:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.452 19:37:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.452 19:37:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.452 19:37:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.452 19:37:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.453 19:37:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:28.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.453 19:37:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:28.453 19:37:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.453 
19:37:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.453 19:37:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.453 19:37:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.453 19:37:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.453 19:37:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.453 19:37:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.453 19:37:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.453 19:37:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.453 19:37:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.453 19:37:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.453 19:37:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.453 19:37:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:28.453 19:37:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:28.453 19:37:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:28.453 19:37:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.453 19:37:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.453 19:37:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.453 19:37:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:28.453 19:37:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.453 19:37:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.453 19:37:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:28.453 19:37:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.453 19:37:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.453 19:37:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:28.453 19:37:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:28.453 19:37:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.453 19:37:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.453 19:37:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.453 19:37:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.453 19:37:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:28.453 19:37:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.453 19:37:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.453 19:37:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.453 19:37:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:28.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:28:28.453 00:28:28.453 --- 10.0.0.2 ping statistics --- 00:28:28.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.453 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:28.453 19:37:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:28:28.453 00:28:28.453 --- 10.0.0.1 ping statistics --- 00:28:28.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.453 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:28:28.453 19:37:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.453 19:37:26 -- nvmf/common.sh@410 -- # return 0 00:28:28.453 19:37:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:28.453 19:37:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.453 19:37:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:28.453 19:37:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.453 19:37:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:28.453 19:37:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:28.712 19:37:26 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:28.712 19:37:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:28.712 19:37:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.712 19:37:26 -- common/autotest_common.sh@10 -- # set +x 00:28:28.712 19:37:26 -- nvmf/common.sh@469 -- # nvmfpid=1313212 00:28:28.712 19:37:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:28.712 19:37:26 -- nvmf/common.sh@470 -- # waitforlisten 1313212 00:28:28.712 19:37:26 -- common/autotest_common.sh@829 -- # '[' -z 1313212 ']' 00:28:28.712 19:37:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.712 19:37:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.712 19:37:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.712 19:37:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.712 19:37:26 -- common/autotest_common.sh@10 -- # set +x 00:28:28.712 [2024-11-17 19:37:26.786053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:28.712 [2024-11-17 19:37:26.786147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.712 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.712 [2024-11-17 19:37:26.857068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.712 [2024-11-17 19:37:26.945758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:28.712 [2024-11-17 19:37:26.945933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.712 [2024-11-17 19:37:26.945954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.712 [2024-11-17 19:37:26.945968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:28.712 [2024-11-17 19:37:26.945998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.647 19:37:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.647 19:37:27 -- common/autotest_common.sh@862 -- # return 0 00:28:29.647 19:37:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:29.647 19:37:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.647 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.647 19:37:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.647 19:37:27 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.647 19:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.647 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.647 [2024-11-17 19:37:27.770986] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.647 19:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.647 19:37:27 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:29.647 19:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.647 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.647 [2024-11-17 19:37:27.779152] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:29.647 19:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.647 19:37:27 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:29.647 19:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.647 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.647 null0 00:28:29.647 19:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.647 19:37:27 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:29.647 19:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.648 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.648 null1 00:28:29.648 19:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.648 19:37:27 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:29.648 19:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.648 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.648 19:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.648 19:37:27 -- host/discovery.sh@45 -- # hostpid=1313367 00:28:29.648 19:37:27 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:29.648 19:37:27 -- host/discovery.sh@46 -- # waitforlisten 1313367 /tmp/host.sock 00:28:29.648 19:37:27 -- common/autotest_common.sh@829 -- # '[' -z 1313367 ']' 00:28:29.648 19:37:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:28:29.648 19:37:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:29.648 19:37:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:29.648 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:29.648 19:37:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:29.648 19:37:27 -- common/autotest_common.sh@10 -- # set +x 00:28:29.648 [2024-11-17 19:37:27.849553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:29.648 [2024-11-17 19:37:27.849618] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313367 ] 00:28:29.648 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.648 [2024-11-17 19:37:27.911885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.908 [2024-11-17 19:37:28.002778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:29.908 [2024-11-17 19:37:28.002933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.843 19:37:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.843 19:37:28 -- common/autotest_common.sh@862 -- # return 0 00:28:30.843 19:37:28 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:30.843 19:37:28 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:28 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:28 -- host/discovery.sh@72 -- # notify_id=0 00:28:30.843 19:37:28 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # sort 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # xargs 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:28 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:30.843 19:37:28 -- host/discovery.sh@79 -- # get_bdev_list 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # sort 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # xargs 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:28 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:30.843 19:37:28 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:28 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # sort 00:28:30.843 19:37:28 -- host/discovery.sh@59 -- # xargs 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:28 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:30.843 19:37:28 -- host/discovery.sh@83 -- # get_bdev_list 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.843 19:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:28 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # sort 00:28:30.843 19:37:28 -- host/discovery.sh@55 -- # xargs 00:28:30.843 19:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:29 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:30.843 19:37:29 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:30.843 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:29 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:30.843 19:37:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:30.843 19:37:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:30.843 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.843 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.843 19:37:29 -- host/discovery.sh@59 -- # sort 00:28:30.843 19:37:29 -- host/discovery.sh@59 -- # xargs 00:28:30.843 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.843 19:37:29 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:30.843 19:37:29 -- host/discovery.sh@87 -- # get_bdev_list 00:28:30.843 19:37:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.844 19:37:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:30.844 19:37:29 -- host/discovery.sh@55 -- # sort 00:28:30.844 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.844 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.844 19:37:29 -- host/discovery.sh@55 -- # xargs 00:28:30.844 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.844 19:37:29 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:30.844 19:37:29 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:30.844 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.844 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.844 [2024-11-17 19:37:29.094875] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.844 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.844 19:37:29 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:30.844 19:37:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:30.844 19:37:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:30.844 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.844 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.844 19:37:29 -- host/discovery.sh@59 -- # sort 00:28:30.844 19:37:29 -- 
host/discovery.sh@59 -- # xargs 00:28:31.112 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.113 19:37:29 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:31.113 19:37:29 -- host/discovery.sh@93 -- # get_bdev_list 00:28:31.113 19:37:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.113 19:37:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.113 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.113 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:31.113 19:37:29 -- host/discovery.sh@55 -- # sort 00:28:31.113 19:37:29 -- host/discovery.sh@55 -- # xargs 00:28:31.113 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.113 19:37:29 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:31.113 19:37:29 -- host/discovery.sh@94 -- # get_notification_count 00:28:31.113 19:37:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:31.113 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.113 19:37:29 -- host/discovery.sh@74 -- # jq '. | length' 00:28:31.113 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:31.113 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.113 19:37:29 -- host/discovery.sh@74 -- # notification_count=0 00:28:31.113 19:37:29 -- host/discovery.sh@75 -- # notify_id=0 00:28:31.113 19:37:29 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:31.113 19:37:29 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:31.113 19:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.113 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:28:31.113 19:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.113 19:37:29 -- host/discovery.sh@100 -- # sleep 1 00:28:31.686 [2024-11-17 19:37:29.849687] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:31.686 [2024-11-17 19:37:29.849735] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:31.686 [2024-11-17 19:37:29.849759] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:31.686 [2024-11-17 19:37:29.937063] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:31.945 [2024-11-17 19:37:30.162556] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:31.945 [2024-11-17 19:37:30.162607] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:32.203 19:37:30 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:32.203 19:37:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:32.203 19:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.203 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:32.203 19:37:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:32.203 19:37:30 -- host/discovery.sh@59 -- # sort 00:28:32.203 19:37:30 -- host/discovery.sh@59 -- # xargs 00:28:32.203 19:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.203 19:37:30 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.203 19:37:30 -- host/discovery.sh@102 -- # get_bdev_list 00:28:32.203 19:37:30 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.203 19:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.203 19:37:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.203 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:32.203 19:37:30 -- host/discovery.sh@55 -- # sort 00:28:32.203 19:37:30 -- host/discovery.sh@55 -- # xargs 00:28:32.203 19:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.203 19:37:30 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:32.203 19:37:30 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:32.203 19:37:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:32.203 19:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.203 19:37:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:32.203 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:32.204 19:37:30 -- host/discovery.sh@63 -- # sort -n 00:28:32.204 19:37:30 -- host/discovery.sh@63 -- # xargs 00:28:32.204 19:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.204 19:37:30 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:32.204 19:37:30 -- host/discovery.sh@104 -- # get_notification_count 00:28:32.204 19:37:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:32.204 19:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.204 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:32.204 19:37:30 -- host/discovery.sh@74 -- # jq '. | length' 00:28:32.204 19:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.204 19:37:30 -- host/discovery.sh@74 -- # notification_count=1 00:28:32.204 19:37:30 -- host/discovery.sh@75 -- # notify_id=1 00:28:32.204 19:37:30 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:32.204 19:37:30 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:32.204 19:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.204 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:28:32.204 19:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.204 19:37:30 -- host/discovery.sh@109 -- # sleep 1 00:28:33.140 19:37:31 -- host/discovery.sh@110 -- # get_bdev_list 00:28:33.140 19:37:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.140 19:37:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:33.140 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.141 19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:28:33.141 19:37:31 -- host/discovery.sh@55 -- # sort 00:28:33.141 19:37:31 -- host/discovery.sh@55 -- # xargs 00:28:33.399 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.399 19:37:31 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:33.399 19:37:31 -- host/discovery.sh@111 -- # get_notification_count 00:28:33.399 19:37:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:33.399 19:37:31 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:33.399 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.399 19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:28:33.399 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.399 19:37:31 -- host/discovery.sh@74 -- # notification_count=1 00:28:33.399 19:37:31 -- host/discovery.sh@75 -- # notify_id=2 00:28:33.399 19:37:31 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:33.399 19:37:31 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:33.399 19:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.399 19:37:31 -- common/autotest_common.sh@10 -- # set +x 00:28:33.399 [2024-11-17 19:37:31.473959] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:33.399 [2024-11-17 19:37:31.474233] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:33.399 [2024-11-17 19:37:31.474267] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:33.399 19:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.399 19:37:31 -- host/discovery.sh@117 -- # sleep 1 00:28:33.399 [2024-11-17 19:37:31.560523] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:33.399 [2024-11-17 19:37:31.617985] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:33.399 [2024-11-17 19:37:31.618010] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:33.399 [2024-11-17 19:37:31.618021] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:34.333 19:37:32 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:34.333 19:37:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:34.333 19:37:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:34.333 19:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.333 19:37:32 -- host/discovery.sh@59 -- # sort 00:28:34.333 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:28:34.333 19:37:32 -- host/discovery.sh@59 -- # xargs 00:28:34.333 19:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.333 19:37:32 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.333 19:37:32 -- host/discovery.sh@119 -- # get_bdev_list 00:28:34.333 19:37:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.333 19:37:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:34.333 19:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.333 19:37:32 -- host/discovery.sh@55 -- # sort 00:28:34.333 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:28:34.333 19:37:32 -- host/discovery.sh@55 -- # xargs 00:28:34.333 19:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.333 19:37:32 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:34.333 19:37:32 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:34.333 19:37:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:34.333 19:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.333 19:37:32 -- common/autotest_common.sh@10 -- 
# set +x 00:28:34.333 19:37:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:34.333 19:37:32 -- host/discovery.sh@63 -- # sort -n 00:28:34.333 19:37:32 -- host/discovery.sh@63 -- # xargs 00:28:34.333 19:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.333 19:37:32 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:34.333 19:37:32 -- host/discovery.sh@121 -- # get_notification_count 00:28:34.333 19:37:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:34.333 19:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.333 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:28:34.333 19:37:32 -- host/discovery.sh@74 -- # jq '. | length' 00:28:34.592 19:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.592 19:37:32 -- host/discovery.sh@74 -- # notification_count=0 00:28:34.592 19:37:32 -- host/discovery.sh@75 -- # notify_id=2 00:28:34.592 19:37:32 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:34.592 19:37:32 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:34.592 19:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.592 19:37:32 -- common/autotest_common.sh@10 -- # set +x 00:28:34.592 [2024-11-17 19:37:32.637519] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:34.592 [2024-11-17 19:37:32.637554] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:34.592 [2024-11-17 19:37:32.638989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.592 [2024-11-17 19:37:32.639016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.592 [2024-11-17 19:37:32.639046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.592 [2024-11-17 19:37:32.639059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.592 [2024-11-17 19:37:32.639072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.592 [2024-11-17 19:37:32.639098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.592 [2024-11-17 19:37:32.639111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.592 [2024-11-17 19:37:32.639123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.592 [2024-11-17 19:37:32.639136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.592 19:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.592 19:37:32 -- host/discovery.sh@127 -- # sleep 1 00:28:34.592 [2024-11-17 19:37:32.648985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.592 [2024-11-17 19:37:32.659044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.592 [2024-11-17 19:37:32.659306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.659407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.659450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.592 [2024-11-17 19:37:32.659469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.592 [2024-11-17 19:37:32.659510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.592 [2024-11-17 19:37:32.659531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.592 [2024-11-17 19:37:32.659545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.592 [2024-11-17 19:37:32.659560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.592 [2024-11-17 19:37:32.659580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.592 [2024-11-17 19:37:32.669124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.592 [2024-11-17 19:37:32.669381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.669494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.669520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.592 [2024-11-17 19:37:32.669536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.592 [2024-11-17 19:37:32.669558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.592 [2024-11-17 19:37:32.669578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.592 [2024-11-17 19:37:32.669591] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.592 [2024-11-17 19:37:32.669604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.592 [2024-11-17 19:37:32.669622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.592 [2024-11-17 19:37:32.679206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.592 [2024-11-17 19:37:32.679409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.679526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.679552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.592 [2024-11-17 19:37:32.679568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.592 [2024-11-17 19:37:32.679590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.592 [2024-11-17 19:37:32.679609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.592 [2024-11-17 19:37:32.679622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.592 [2024-11-17 19:37:32.679635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.592 [2024-11-17 19:37:32.679653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.592 [2024-11-17 19:37:32.689283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.592 [2024-11-17 19:37:32.689544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.689657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.592 [2024-11-17 19:37:32.689690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.592 [2024-11-17 19:37:32.689707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.592 [2024-11-17 19:37:32.689729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.592 [2024-11-17 19:37:32.689762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.689779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.689792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.689821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.593 [2024-11-17 19:37:32.699360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.699535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.699742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.699768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.699784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.699805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.699840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.699857] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.699870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.699889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.593 [2024-11-17 19:37:32.709433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.709696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.709787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.709813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.709829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.709850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.709893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.709912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.709925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.709945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.593 [2024-11-17 19:37:32.719520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.719765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.719883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.719908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.719924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.719945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.720001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.720021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.720035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.720056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.593 [2024-11-17 19:37:32.729597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.729813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.729969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.729998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.730021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.730046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.730093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.730115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.730131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.730153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.593 [2024-11-17 19:37:32.739681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.739866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.739990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.740015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.740031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.740053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.740073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.740086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.740098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.740117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.593 [2024-11-17 19:37:32.749768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.749953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.750104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.750132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.750149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.750173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.750221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.750242] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.750257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.750278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
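The connect() failures with errno = 111 above are the expected fallout of the test removing the nqn.2016-06.io.spdk:cnode0 listener on port 4420 while the host-side discovery controller still holds a path to it; the remaining retry below repeats the same pattern until the discovery poller refreshes the log page, drops the stale 4420 path and keeps 4421. A minimal sketch of the listener swap and the path check being exercised here, assuming rpc_cmd in these scripts maps to SPDK's scripts/rpc.py and reusing the /tmp/host.sock application socket seen throughout this run:

  # target side: publish the subsystem on 4421, then retire the 4420 listener
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: once the discovery poller has processed the change, only the 4421 path should remain
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

The host-side check mirrors get_subsystem_paths in host/discovery.sh: the attached namespaces (nvme0n1, nvme0n2) survive the swap, and only the set of transport paths reported for nvme0 changes.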
00:28:34.593 [2024-11-17 19:37:32.759849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:34.593 [2024-11-17 19:37:32.759995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.760164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.593 [2024-11-17 19:37:32.760193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22b80 with addr=10.0.0.2, port=4420 00:28:34.593 [2024-11-17 19:37:32.760211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22b80 is same with the state(5) to be set 00:28:34.593 [2024-11-17 19:37:32.760240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22b80 (9): Bad file descriptor 00:28:34.593 [2024-11-17 19:37:32.760278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:34.593 [2024-11-17 19:37:32.760297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:34.593 [2024-11-17 19:37:32.760312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:34.593 [2024-11-17 19:37:32.760332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.593 [2024-11-17 19:37:32.766290] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:34.593 [2024-11-17 19:37:32.766322] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:35.536 19:37:33 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:35.536 19:37:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:35.536 19:37:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:35.536 19:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.536 19:37:33 -- host/discovery.sh@59 -- # sort 00:28:35.536 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 19:37:33 -- host/discovery.sh@59 -- # xargs 00:28:35.536 19:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.536 19:37:33 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.536 19:37:33 -- host/discovery.sh@129 -- # get_bdev_list 00:28:35.536 19:37:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.536 19:37:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:35.536 19:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.536 19:37:33 -- host/discovery.sh@55 -- # sort 00:28:35.536 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 19:37:33 -- host/discovery.sh@55 -- # xargs 00:28:35.536 19:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.536 19:37:33 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:35.536 19:37:33 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:35.536 19:37:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:35.536 19:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.536 19:37:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:35.536 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 19:37:33 -- 
host/discovery.sh@63 -- # sort -n 00:28:35.536 19:37:33 -- host/discovery.sh@63 -- # xargs 00:28:35.536 19:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.536 19:37:33 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:35.536 19:37:33 -- host/discovery.sh@131 -- # get_notification_count 00:28:35.536 19:37:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:35.536 19:37:33 -- host/discovery.sh@74 -- # jq '. | length' 00:28:35.536 19:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.536 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 19:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.850 19:37:33 -- host/discovery.sh@74 -- # notification_count=0 00:28:35.850 19:37:33 -- host/discovery.sh@75 -- # notify_id=2 00:28:35.850 19:37:33 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:35.850 19:37:33 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:35.850 19:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.850 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:28:35.850 19:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.850 19:37:33 -- host/discovery.sh@135 -- # sleep 1 00:28:36.811 19:37:34 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:36.811 19:37:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:36.811 19:37:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:36.811 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.811 19:37:34 -- host/discovery.sh@59 -- # sort 00:28:36.811 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:36.811 19:37:34 -- host/discovery.sh@59 -- # xargs 00:28:36.811 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.811 19:37:34 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:36.811 19:37:34 -- host/discovery.sh@137 -- # get_bdev_list 00:28:36.811 19:37:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.811 19:37:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:36.811 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.811 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:36.811 19:37:34 -- host/discovery.sh@55 -- # sort 00:28:36.811 19:37:34 -- host/discovery.sh@55 -- # xargs 00:28:36.811 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.811 19:37:34 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:36.811 19:37:34 -- host/discovery.sh@138 -- # get_notification_count 00:28:36.811 19:37:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:36.811 19:37:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:36.811 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.811 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:36.811 19:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.811 19:37:34 -- host/discovery.sh@74 -- # notification_count=2 00:28:36.811 19:37:34 -- host/discovery.sh@75 -- # notify_id=4 00:28:36.811 19:37:34 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:36.811 19:37:34 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:36.811 19:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.811 19:37:34 -- common/autotest_common.sh@10 -- # set +x 00:28:38.184 [2024-11-17 19:37:36.036825] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:38.184 [2024-11-17 19:37:36.036847] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:38.184 [2024-11-17 19:37:36.036868] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:38.184 [2024-11-17 19:37:36.123169] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:38.184 [2024-11-17 19:37:36.432080] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:38.184 [2024-11-17 19:37:36.432120] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:38.184 19:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.184 19:37:36 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.184 19:37:36 -- common/autotest_common.sh@650 -- # local es=0 00:28:38.184 19:37:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.184 19:37:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:38.184 19:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.184 19:37:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:38.184 19:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.184 19:37:36 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.184 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.184 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:38.184 request: 00:28:38.184 { 00:28:38.184 "name": "nvme", 00:28:38.184 "trtype": "tcp", 00:28:38.184 "traddr": "10.0.0.2", 00:28:38.184 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:38.184 "adrfam": "ipv4", 00:28:38.184 "trsvcid": "8009", 00:28:38.184 "wait_for_attach": true, 00:28:38.184 "method": "bdev_nvme_start_discovery", 00:28:38.184 "req_id": 1 00:28:38.184 } 00:28:38.184 Got JSON-RPC error response 00:28:38.184 response: 00:28:38.184 { 00:28:38.184 "code": -17, 00:28:38.184 "message": "File exists" 00:28:38.184 } 00:28:38.184 19:37:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:38.184 19:37:36 -- common/autotest_common.sh@653 -- # es=1 00:28:38.184 19:37:36 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:38.184 19:37:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:38.184 19:37:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:38.184 19:37:36 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:38.185 19:37:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:38.185 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.185 19:37:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:38.185 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:38.185 19:37:36 -- host/discovery.sh@67 -- # sort 00:28:38.185 19:37:36 -- host/discovery.sh@67 -- # xargs 00:28:38.442 19:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.443 19:37:36 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:38.443 19:37:36 -- host/discovery.sh@147 -- # get_bdev_list 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.443 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:38.443 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # sort 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # xargs 00:28:38.443 19:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.443 19:37:36 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:38.443 19:37:36 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.443 19:37:36 -- common/autotest_common.sh@650 -- # local es=0 00:28:38.443 19:37:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.443 19:37:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:38.443 19:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.443 19:37:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:38.443 19:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.443 19:37:36 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.443 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.443 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:38.443 request: 00:28:38.443 { 00:28:38.443 "name": "nvme_second", 00:28:38.443 "trtype": "tcp", 00:28:38.443 "traddr": "10.0.0.2", 00:28:38.443 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:38.443 "adrfam": "ipv4", 00:28:38.443 "trsvcid": "8009", 00:28:38.443 "wait_for_attach": true, 00:28:38.443 "method": "bdev_nvme_start_discovery", 00:28:38.443 "req_id": 1 00:28:38.443 } 00:28:38.443 Got JSON-RPC error response 00:28:38.443 response: 00:28:38.443 { 00:28:38.443 "code": -17, 00:28:38.443 "message": "File exists" 00:28:38.443 } 00:28:38.443 19:37:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:38.443 19:37:36 -- common/autotest_common.sh@653 -- # es=1 00:28:38.443 19:37:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:38.443 19:37:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:38.443 19:37:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:38.443 
19:37:36 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:38.443 19:37:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:38.443 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.443 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:38.443 19:37:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:38.443 19:37:36 -- host/discovery.sh@67 -- # sort 00:28:38.443 19:37:36 -- host/discovery.sh@67 -- # xargs 00:28:38.443 19:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.443 19:37:36 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:38.443 19:37:36 -- host/discovery.sh@153 -- # get_bdev_list 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.443 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.443 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # sort 00:28:38.443 19:37:36 -- host/discovery.sh@55 -- # xargs 00:28:38.443 19:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.443 19:37:36 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:38.443 19:37:36 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:38.443 19:37:36 -- common/autotest_common.sh@650 -- # local es=0 00:28:38.443 19:37:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:38.443 19:37:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:38.443 19:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.443 19:37:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:38.443 19:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:38.443 19:37:36 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:38.443 19:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.443 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:39.376 [2024-11-17 19:37:37.632416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.376 [2024-11-17 19:37:37.632587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.376 [2024-11-17 19:37:37.632617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd5a3c0 with addr=10.0.0.2, port=8010 00:28:39.376 [2024-11-17 19:37:37.632640] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:39.376 [2024-11-17 19:37:37.632654] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:39.376 [2024-11-17 19:37:37.632667] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:40.749 [2024-11-17 19:37:38.634840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.749 [2024-11-17 19:37:38.634957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.749 [2024-11-17 19:37:38.634982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0xd5a3c0 with addr=10.0.0.2, port=8010 00:28:40.749 [2024-11-17 19:37:38.635004] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:40.750 [2024-11-17 19:37:38.635016] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:40.750 [2024-11-17 19:37:38.635028] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:41.682 [2024-11-17 19:37:39.637172] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:41.682 request: 00:28:41.682 { 00:28:41.682 "name": "nvme_second", 00:28:41.682 "trtype": "tcp", 00:28:41.682 "traddr": "10.0.0.2", 00:28:41.682 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:41.682 "adrfam": "ipv4", 00:28:41.682 "trsvcid": "8010", 00:28:41.682 "attach_timeout_ms": 3000, 00:28:41.682 "method": "bdev_nvme_start_discovery", 00:28:41.682 "req_id": 1 00:28:41.682 } 00:28:41.682 Got JSON-RPC error response 00:28:41.682 response: 00:28:41.682 { 00:28:41.682 "code": -110, 00:28:41.682 "message": "Connection timed out" 00:28:41.682 } 00:28:41.682 19:37:39 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:41.682 19:37:39 -- common/autotest_common.sh@653 -- # es=1 00:28:41.682 19:37:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:41.682 19:37:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:41.682 19:37:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:41.682 19:37:39 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:41.682 19:37:39 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:41.682 19:37:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.682 19:37:39 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:41.682 19:37:39 -- common/autotest_common.sh@10 -- # set +x 00:28:41.682 19:37:39 -- host/discovery.sh@67 -- # sort 00:28:41.682 19:37:39 -- host/discovery.sh@67 -- # xargs 00:28:41.682 19:37:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.682 19:37:39 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:41.682 19:37:39 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:41.682 19:37:39 -- host/discovery.sh@162 -- # kill 1313367 00:28:41.682 19:37:39 -- host/discovery.sh@163 -- # nvmftestfini 00:28:41.682 19:37:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:41.682 19:37:39 -- nvmf/common.sh@116 -- # sync 00:28:41.682 19:37:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:41.682 19:37:39 -- nvmf/common.sh@119 -- # set +e 00:28:41.682 19:37:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:41.682 19:37:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:41.682 rmmod nvme_tcp 00:28:41.682 rmmod nvme_fabrics 00:28:41.682 rmmod nvme_keyring 00:28:41.682 19:37:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:41.682 19:37:39 -- nvmf/common.sh@123 -- # set -e 00:28:41.682 19:37:39 -- nvmf/common.sh@124 -- # return 0 00:28:41.682 19:37:39 -- nvmf/common.sh@477 -- # '[' -n 1313212 ']' 00:28:41.682 19:37:39 -- nvmf/common.sh@478 -- # killprocess 1313212 00:28:41.682 19:37:39 -- common/autotest_common.sh@936 -- # '[' -z 1313212 ']' 00:28:41.682 19:37:39 -- common/autotest_common.sh@940 -- # kill -0 1313212 00:28:41.682 19:37:39 -- common/autotest_common.sh@941 -- # uname 00:28:41.682 19:37:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:41.682 19:37:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1313212 00:28:41.682 
19:37:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:41.682 19:37:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:41.682 19:37:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1313212' 00:28:41.682 killing process with pid 1313212 00:28:41.682 19:37:39 -- common/autotest_common.sh@955 -- # kill 1313212 00:28:41.682 19:37:39 -- common/autotest_common.sh@960 -- # wait 1313212 00:28:41.941 19:37:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:41.941 19:37:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:41.941 19:37:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:41.941 19:37:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.941 19:37:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:41.941 19:37:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.941 19:37:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.941 19:37:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.842 19:37:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:43.842 00:28:43.842 real 0m17.746s 00:28:43.842 user 0m27.377s 00:28:43.842 sys 0m2.850s 00:28:43.842 19:37:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:43.842 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:43.842 ************************************ 00:28:43.842 END TEST nvmf_discovery 00:28:43.842 ************************************ 00:28:43.842 19:37:42 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:43.842 19:37:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:43.842 19:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:43.842 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:43.842 ************************************ 00:28:43.842 START TEST nvmf_discovery_remove_ifc 00:28:43.842 ************************************ 00:28:43.842 19:37:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:44.101 * Looking for test storage... 
00:28:44.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.101 19:37:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:44.101 19:37:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:44.101 19:37:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:44.101 19:37:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:44.101 19:37:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:44.101 19:37:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:44.101 19:37:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:44.101 19:37:42 -- scripts/common.sh@335 -- # IFS=.-: 00:28:44.101 19:37:42 -- scripts/common.sh@335 -- # read -ra ver1 00:28:44.101 19:37:42 -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.101 19:37:42 -- scripts/common.sh@336 -- # read -ra ver2 00:28:44.101 19:37:42 -- scripts/common.sh@337 -- # local 'op=<' 00:28:44.101 19:37:42 -- scripts/common.sh@339 -- # ver1_l=2 00:28:44.101 19:37:42 -- scripts/common.sh@340 -- # ver2_l=1 00:28:44.101 19:37:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:44.101 19:37:42 -- scripts/common.sh@343 -- # case "$op" in 00:28:44.101 19:37:42 -- scripts/common.sh@344 -- # : 1 00:28:44.101 19:37:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:44.101 19:37:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.101 19:37:42 -- scripts/common.sh@364 -- # decimal 1 00:28:44.101 19:37:42 -- scripts/common.sh@352 -- # local d=1 00:28:44.101 19:37:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.101 19:37:42 -- scripts/common.sh@354 -- # echo 1 00:28:44.101 19:37:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:44.101 19:37:42 -- scripts/common.sh@365 -- # decimal 2 00:28:44.101 19:37:42 -- scripts/common.sh@352 -- # local d=2 00:28:44.101 19:37:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.101 19:37:42 -- scripts/common.sh@354 -- # echo 2 00:28:44.101 19:37:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:44.101 19:37:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:44.101 19:37:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:44.101 19:37:42 -- scripts/common.sh@367 -- # return 0 00:28:44.101 19:37:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.101 19:37:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.101 --rc genhtml_branch_coverage=1 00:28:44.101 --rc genhtml_function_coverage=1 00:28:44.101 --rc genhtml_legend=1 00:28:44.101 --rc geninfo_all_blocks=1 00:28:44.101 --rc geninfo_unexecuted_blocks=1 00:28:44.101 00:28:44.101 ' 00:28:44.101 19:37:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.101 --rc genhtml_branch_coverage=1 00:28:44.101 --rc genhtml_function_coverage=1 00:28:44.101 --rc genhtml_legend=1 00:28:44.101 --rc geninfo_all_blocks=1 00:28:44.101 --rc geninfo_unexecuted_blocks=1 00:28:44.101 00:28:44.101 ' 00:28:44.101 19:37:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.101 --rc genhtml_branch_coverage=1 00:28:44.101 --rc genhtml_function_coverage=1 00:28:44.101 --rc genhtml_legend=1 00:28:44.101 --rc geninfo_all_blocks=1 00:28:44.101 --rc geninfo_unexecuted_blocks=1 00:28:44.101 00:28:44.101 ' 
00:28:44.101 19:37:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:44.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.101 --rc genhtml_branch_coverage=1 00:28:44.101 --rc genhtml_function_coverage=1 00:28:44.101 --rc genhtml_legend=1 00:28:44.101 --rc geninfo_all_blocks=1 00:28:44.101 --rc geninfo_unexecuted_blocks=1 00:28:44.101 00:28:44.101 ' 00:28:44.101 19:37:42 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.101 19:37:42 -- nvmf/common.sh@7 -- # uname -s 00:28:44.101 19:37:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.101 19:37:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.101 19:37:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.102 19:37:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.102 19:37:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.102 19:37:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.102 19:37:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.102 19:37:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.102 19:37:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.102 19:37:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.102 19:37:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.102 19:37:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.102 19:37:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.102 19:37:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.102 19:37:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.102 19:37:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.102 19:37:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.102 19:37:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.102 19:37:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.102 19:37:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.102 19:37:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.102 19:37:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.102 19:37:42 -- paths/export.sh@5 -- # export PATH 00:28:44.102 19:37:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.102 19:37:42 -- nvmf/common.sh@46 -- # : 0 00:28:44.102 19:37:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:44.102 19:37:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:44.102 19:37:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:44.102 19:37:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.102 19:37:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.102 19:37:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:44.102 19:37:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:44.102 19:37:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:44.102 19:37:42 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:44.102 19:37:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:44.102 19:37:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.102 19:37:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:44.102 19:37:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:44.102 19:37:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:44.102 19:37:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.102 19:37:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.102 19:37:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.102 19:37:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:44.102 19:37:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:44.102 19:37:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:44.102 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:28:46.003 19:37:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:46.003 19:37:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:46.003 19:37:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:46.003 19:37:44 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:46.003 19:37:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:46.003 19:37:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:46.003 19:37:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:46.003 19:37:44 -- nvmf/common.sh@294 -- # net_devs=() 00:28:46.003 19:37:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:46.003 19:37:44 -- nvmf/common.sh@295 -- # e810=() 00:28:46.003 19:37:44 -- nvmf/common.sh@295 -- # local -ga e810 00:28:46.003 19:37:44 -- nvmf/common.sh@296 -- # x722=() 00:28:46.003 19:37:44 -- nvmf/common.sh@296 -- # local -ga x722 00:28:46.003 19:37:44 -- nvmf/common.sh@297 -- # mlx=() 00:28:46.003 19:37:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:46.003 19:37:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.003 19:37:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:46.003 19:37:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:46.003 19:37:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:46.003 19:37:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:46.003 19:37:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:46.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:46.003 19:37:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:46.003 19:37:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:46.003 19:37:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:46.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:46.003 19:37:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:46.004 19:37:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:46.004 19:37:44 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:46.004 19:37:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.004 19:37:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:46.004 19:37:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.004 19:37:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:46.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:46.004 19:37:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.004 19:37:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:46.004 19:37:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.004 19:37:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:46.004 19:37:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.004 19:37:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:46.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:46.004 19:37:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.004 19:37:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:46.004 19:37:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:46.004 19:37:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:46.004 19:37:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.004 19:37:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.004 19:37:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.004 19:37:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:46.004 19:37:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.004 19:37:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.004 19:37:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:46.004 19:37:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.004 19:37:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.004 19:37:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:46.004 19:37:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:46.004 19:37:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.004 19:37:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.004 19:37:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.004 19:37:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.004 19:37:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:46.004 19:37:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.004 19:37:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.004 19:37:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.004 19:37:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:46.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:46.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:28:46.004 00:28:46.004 --- 10.0.0.2 ping statistics --- 00:28:46.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.004 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:28:46.004 19:37:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:46.004 00:28:46.004 --- 10.0.0.1 ping statistics --- 00:28:46.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.004 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:46.004 19:37:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.004 19:37:44 -- nvmf/common.sh@410 -- # return 0 00:28:46.004 19:37:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:46.004 19:37:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.004 19:37:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:46.004 19:37:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.004 19:37:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:46.004 19:37:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:46.004 19:37:44 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:46.004 19:37:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:46.004 19:37:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:46.004 19:37:44 -- common/autotest_common.sh@10 -- # set +x 00:28:46.004 19:37:44 -- nvmf/common.sh@469 -- # nvmfpid=1316963 00:28:46.004 19:37:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:46.004 19:37:44 -- nvmf/common.sh@470 -- # waitforlisten 1316963 00:28:46.004 19:37:44 -- common/autotest_common.sh@829 -- # '[' -z 1316963 ']' 00:28:46.004 19:37:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.004 19:37:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.004 19:37:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.004 19:37:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.004 19:37:44 -- common/autotest_common.sh@10 -- # set +x 00:28:46.262 [2024-11-17 19:37:44.285214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:46.262 [2024-11-17 19:37:44.285294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.262 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.262 [2024-11-17 19:37:44.354352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.262 [2024-11-17 19:37:44.442004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:46.262 [2024-11-17 19:37:44.442180] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:46.262 [2024-11-17 19:37:44.442198] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.262 [2024-11-17 19:37:44.442213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.262 [2024-11-17 19:37:44.442245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.196 19:37:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.196 19:37:45 -- common/autotest_common.sh@862 -- # return 0 00:28:47.196 19:37:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:47.196 19:37:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:47.196 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:28:47.196 19:37:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.196 19:37:45 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:47.196 19:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.196 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:28:47.196 [2024-11-17 19:37:45.268791] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.196 [2024-11-17 19:37:45.276981] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:47.196 null0 00:28:47.196 [2024-11-17 19:37:45.308876] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.196 19:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.196 19:37:45 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1317118 00:28:47.196 19:37:45 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1317118 /tmp/host.sock 00:28:47.196 19:37:45 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:47.196 19:37:45 -- common/autotest_common.sh@829 -- # '[' -z 1317118 ']' 00:28:47.196 19:37:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:28:47.196 19:37:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.196 19:37:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:47.196 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:47.196 19:37:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.196 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:28:47.196 [2024-11-17 19:37:45.372167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:47.196 [2024-11-17 19:37:45.372245] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317118 ] 00:28:47.196 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.196 [2024-11-17 19:37:45.432459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.455 [2024-11-17 19:37:45.518588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:47.455 [2024-11-17 19:37:45.518787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.455 19:37:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.455 19:37:45 -- common/autotest_common.sh@862 -- # return 0 00:28:47.455 19:37:45 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:47.455 19:37:45 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:47.455 19:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.455 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:28:47.455 19:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.455 19:37:45 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:47.455 19:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.455 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:28:47.455 19:37:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.455 19:37:45 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:47.455 19:37:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.455 19:37:45 -- common/autotest_common.sh@10 -- # set +x 00:28:48.828 [2024-11-17 19:37:46.756801] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:48.828 [2024-11-17 19:37:46.756843] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:48.828 [2024-11-17 19:37:46.756866] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:48.828 [2024-11-17 19:37:46.843151] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:48.828 [2024-11-17 19:37:47.068374] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:48.828 [2024-11-17 19:37:47.068433] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:48.828 [2024-11-17 19:37:47.068477] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:48.828 [2024-11-17 19:37:47.068508] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:48.828 [2024-11-17 19:37:47.068552] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:48.828 19:37:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.828 19:37:47 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:48.828 19:37:47 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:48.828 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.828 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.828 19:37:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.828 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:28:48.828 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.828 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.828 19:37:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:49.086 [2024-11-17 19:37:47.115380] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1edf4b0 was disconnected and freed. delete nvme_qpair. 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:49.086 19:37:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.086 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:49.086 19:37:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:49.086 19:37:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:50.018 19:37:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:50.018 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:50.018 19:37:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:50.018 19:37:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.391 19:37:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.391 19:37:49 -- common/autotest_common.sh@10 -- # set +x 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.391 19:37:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:51.391 19:37:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.325 19:37:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.325 19:37:50 -- common/autotest_common.sh@10 -- # set +x 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.325 19:37:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:52.325 19:37:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.258 19:37:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.258 19:37:51 -- common/autotest_common.sh@10 -- # set +x 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.258 19:37:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:53.258 19:37:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.192 19:37:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.192 19:37:52 -- common/autotest_common.sh@10 -- # set +x 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.192 19:37:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.192 19:37:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.449 [2024-11-17 19:37:52.509696] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:54.449 [2024-11-17 19:37:52.509774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.449 [2024-11-17 19:37:52.509811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.449 [2024-11-17 19:37:52.509828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.449 [2024-11-17 19:37:52.509841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.449 [2024-11-17 19:37:52.509854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.449 [2024-11-17 19:37:52.509866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.449 [2024-11-17 19:37:52.509879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:54.449 [2024-11-17 19:37:52.509891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.449 [2024-11-17 19:37:52.509904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.450 [2024-11-17 19:37:52.509916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.450 [2024-11-17 19:37:52.509929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea6790 is same with the state(5) to be set 00:28:54.450 [2024-11-17 19:37:52.519710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea6790 (9): Bad file descriptor 00:28:54.450 [2024-11-17 19:37:52.529772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:55.383 19:37:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.383 19:37:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.383 19:37:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.383 19:37:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.383 19:37:53 -- common/autotest_common.sh@10 -- # set +x 00:28:55.383 19:37:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.383 19:37:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.383 [2024-11-17 19:37:53.587722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:56.755 [2024-11-17 19:37:54.611793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:56.755 [2024-11-17 19:37:54.611856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea6790 with addr=10.0.0.2, port=4420 00:28:56.755 [2024-11-17 19:37:54.611886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea6790 is same with the state(5) to be set 00:28:56.755 [2024-11-17 19:37:54.611928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:56.755 [2024-11-17 19:37:54.611948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:56.755 [2024-11-17 19:37:54.611963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:56.755 [2024-11-17 19:37:54.611980] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:56.755 [2024-11-17 19:37:54.612430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea6790 (9): Bad file descriptor 00:28:56.755 [2024-11-17 19:37:54.612474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.755 [2024-11-17 19:37:54.612519] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:56.755 [2024-11-17 19:37:54.612558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.756 [2024-11-17 19:37:54.612583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.756 [2024-11-17 19:37:54.612604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.756 [2024-11-17 19:37:54.612619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.756 [2024-11-17 19:37:54.612635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.756 [2024-11-17 19:37:54.612650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.756 [2024-11-17 19:37:54.612665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.756 [2024-11-17 19:37:54.612700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.756 [2024-11-17 19:37:54.612733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.756 [2024-11-17 19:37:54.612747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.756 [2024-11-17 19:37:54.612761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
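At this point in the trace the target-side address has been deleted and cvl_0_0 taken down, so the host controller's reconnect attempts fail (connect() errno 110) and the discovery poller removes the entry; the surrounding wait_for_bdev '' loop keeps polling until the namespace bdev vanishes. A minimal sketch of that loop, reconstructed only from the get_bdev_list/sleep fragments traced above (the /tmp/host.sock socket and rpc.py path are the ones printed in the trace):

# Reconstructed sketch of the traced wait_for_bdev '' loop: poll the host-side
# bdev list until no bdev (nvme0n1) is reported anymore.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
while true; do
    bdevs=$("$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ "$bdevs" == '' ]] && break   # nvme0n1 drops out once reconnects give up (ctrlr-loss-timeout-sec 2 above)
    sleep 1
done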
00:28:56.756 [2024-11-17 19:37:54.612926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5c40 (9): Bad file descriptor 00:28:56.756 [2024-11-17 19:37:54.613942] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:56.756 [2024-11-17 19:37:54.613981] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:56.756 19:37:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.756 19:37:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.756 19:37:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.689 19:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.689 19:37:55 -- common/autotest_common.sh@10 -- # set +x 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.689 19:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.689 19:37:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.689 19:37:55 -- common/autotest_common.sh@10 -- # set +x 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.689 19:37:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:57.689 19:37:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.622 [2024-11-17 19:37:56.663405] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:58.622 [2024-11-17 19:37:56.663431] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:58.622 [2024-11-17 19:37:56.663456] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:58.622 [2024-11-17 19:37:56.750745] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.622 19:37:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.622 19:37:56 -- common/autotest_common.sh@10 -- # set +x 00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.622 19:37:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:58.622 19:37:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.622 [2024-11-17 19:37:56.852668] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:58.622 [2024-11-17 19:37:56.852737] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:58.622 [2024-11-17 19:37:56.852782] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:58.622 [2024-11-17 19:37:56.852804] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:58.622 [2024-11-17 19:37:56.852817] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:58.622 [2024-11-17 19:37:56.861414] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1eea180 was disconnected and freed. delete nvme_qpair. 00:28:59.555 19:37:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.555 19:37:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.555 19:37:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.555 19:37:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.555 19:37:57 -- common/autotest_common.sh@10 -- # set +x 00:28:59.555 19:37:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.555 19:37:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.555 19:37:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.813 19:37:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:59.813 19:37:57 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:59.813 19:37:57 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1317118 00:28:59.813 19:37:57 -- common/autotest_common.sh@936 -- # '[' -z 1317118 ']' 00:28:59.813 19:37:57 -- common/autotest_common.sh@940 -- # kill -0 1317118 00:28:59.813 19:37:57 -- common/autotest_common.sh@941 -- # uname 00:28:59.813 19:37:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:59.813 19:37:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1317118 00:28:59.813 19:37:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:59.813 19:37:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:59.813 19:37:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1317118' 00:28:59.813 killing process with pid 1317118 00:28:59.813 19:37:57 -- common/autotest_common.sh@955 -- # kill 1317118 00:28:59.813 19:37:57 -- common/autotest_common.sh@960 -- # wait 1317118 00:29:00.069 19:37:58 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:00.069 19:37:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:00.069 19:37:58 -- nvmf/common.sh@116 -- # sync 00:29:00.069 19:37:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:00.069 19:37:58 -- nvmf/common.sh@119 -- # set +e 00:29:00.069 19:37:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:00.069 19:37:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:00.069 rmmod nvme_tcp 00:29:00.069 rmmod nvme_fabrics 00:29:00.069 rmmod nvme_keyring 00:29:00.069 19:37:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:00.069 19:37:58 -- nvmf/common.sh@123 -- # set -e 00:29:00.069 19:37:58 
-- nvmf/common.sh@124 -- # return 0 00:29:00.069 19:37:58 -- nvmf/common.sh@477 -- # '[' -n 1316963 ']' 00:29:00.069 19:37:58 -- nvmf/common.sh@478 -- # killprocess 1316963 00:29:00.069 19:37:58 -- common/autotest_common.sh@936 -- # '[' -z 1316963 ']' 00:29:00.069 19:37:58 -- common/autotest_common.sh@940 -- # kill -0 1316963 00:29:00.069 19:37:58 -- common/autotest_common.sh@941 -- # uname 00:29:00.069 19:37:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:00.069 19:37:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1316963 00:29:00.069 19:37:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:00.069 19:37:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:00.069 19:37:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1316963' 00:29:00.069 killing process with pid 1316963 00:29:00.069 19:37:58 -- common/autotest_common.sh@955 -- # kill 1316963 00:29:00.069 19:37:58 -- common/autotest_common.sh@960 -- # wait 1316963 00:29:00.326 19:37:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:00.326 19:37:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:00.326 19:37:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:00.326 19:37:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.326 19:37:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:00.326 19:37:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.326 19:37:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.326 19:37:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.225 19:38:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:02.225 00:29:02.225 real 0m18.410s 00:29:02.225 user 0m25.714s 00:29:02.225 sys 0m2.899s 00:29:02.225 19:38:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:02.225 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:29:02.225 ************************************ 00:29:02.225 END TEST nvmf_discovery_remove_ifc 00:29:02.225 ************************************ 00:29:02.484 19:38:00 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:02.484 19:38:00 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:02.484 19:38:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:02.484 19:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:02.484 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:29:02.484 ************************************ 00:29:02.484 START TEST nvmf_digest 00:29:02.484 ************************************ 00:29:02.484 19:38:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:02.484 * Looking for test storage... 
00:29:02.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.484 19:38:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:02.484 19:38:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:02.484 19:38:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:02.484 19:38:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:02.484 19:38:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:02.484 19:38:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:02.484 19:38:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:02.484 19:38:00 -- scripts/common.sh@335 -- # IFS=.-: 00:29:02.484 19:38:00 -- scripts/common.sh@335 -- # read -ra ver1 00:29:02.484 19:38:00 -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.484 19:38:00 -- scripts/common.sh@336 -- # read -ra ver2 00:29:02.484 19:38:00 -- scripts/common.sh@337 -- # local 'op=<' 00:29:02.484 19:38:00 -- scripts/common.sh@339 -- # ver1_l=2 00:29:02.484 19:38:00 -- scripts/common.sh@340 -- # ver2_l=1 00:29:02.484 19:38:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:02.484 19:38:00 -- scripts/common.sh@343 -- # case "$op" in 00:29:02.484 19:38:00 -- scripts/common.sh@344 -- # : 1 00:29:02.484 19:38:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:02.484 19:38:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.484 19:38:00 -- scripts/common.sh@364 -- # decimal 1 00:29:02.484 19:38:00 -- scripts/common.sh@352 -- # local d=1 00:29:02.484 19:38:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.484 19:38:00 -- scripts/common.sh@354 -- # echo 1 00:29:02.484 19:38:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:02.484 19:38:00 -- scripts/common.sh@365 -- # decimal 2 00:29:02.484 19:38:00 -- scripts/common.sh@352 -- # local d=2 00:29:02.484 19:38:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.484 19:38:00 -- scripts/common.sh@354 -- # echo 2 00:29:02.484 19:38:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:02.484 19:38:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:02.484 19:38:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:02.484 19:38:00 -- scripts/common.sh@367 -- # return 0 00:29:02.484 19:38:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.484 19:38:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:02.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.484 --rc genhtml_branch_coverage=1 00:29:02.484 --rc genhtml_function_coverage=1 00:29:02.484 --rc genhtml_legend=1 00:29:02.484 --rc geninfo_all_blocks=1 00:29:02.484 --rc geninfo_unexecuted_blocks=1 00:29:02.484 00:29:02.484 ' 00:29:02.484 19:38:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:02.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.484 --rc genhtml_branch_coverage=1 00:29:02.484 --rc genhtml_function_coverage=1 00:29:02.484 --rc genhtml_legend=1 00:29:02.484 --rc geninfo_all_blocks=1 00:29:02.484 --rc geninfo_unexecuted_blocks=1 00:29:02.484 00:29:02.484 ' 00:29:02.484 19:38:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:02.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.484 --rc genhtml_branch_coverage=1 00:29:02.484 --rc genhtml_function_coverage=1 00:29:02.484 --rc genhtml_legend=1 00:29:02.484 --rc geninfo_all_blocks=1 00:29:02.484 --rc geninfo_unexecuted_blocks=1 00:29:02.484 00:29:02.484 ' 
00:29:02.484 19:38:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:02.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.485 --rc genhtml_branch_coverage=1 00:29:02.485 --rc genhtml_function_coverage=1 00:29:02.485 --rc genhtml_legend=1 00:29:02.485 --rc geninfo_all_blocks=1 00:29:02.485 --rc geninfo_unexecuted_blocks=1 00:29:02.485 00:29:02.485 ' 00:29:02.485 19:38:00 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.485 19:38:00 -- nvmf/common.sh@7 -- # uname -s 00:29:02.485 19:38:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.485 19:38:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.485 19:38:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.485 19:38:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.485 19:38:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.485 19:38:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.485 19:38:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.485 19:38:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.485 19:38:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.485 19:38:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.485 19:38:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.485 19:38:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.485 19:38:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.485 19:38:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.485 19:38:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.485 19:38:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.485 19:38:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.485 19:38:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.485 19:38:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.485 19:38:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.485 19:38:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.485 19:38:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.485 19:38:00 -- paths/export.sh@5 -- # export PATH 00:29:02.485 19:38:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.485 19:38:00 -- nvmf/common.sh@46 -- # : 0 00:29:02.485 19:38:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:02.485 19:38:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:02.485 19:38:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:02.485 19:38:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.485 19:38:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.485 19:38:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:02.485 19:38:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:02.485 19:38:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:02.485 19:38:00 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:02.485 19:38:00 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:02.485 19:38:00 -- host/digest.sh@16 -- # runtime=2 00:29:02.485 19:38:00 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:02.485 19:38:00 -- host/digest.sh@132 -- # nvmftestinit 00:29:02.485 19:38:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:02.485 19:38:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.485 19:38:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:02.485 19:38:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:02.485 19:38:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:02.485 19:38:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.485 19:38:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.485 19:38:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.485 19:38:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:02.485 19:38:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:02.485 19:38:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:02.485 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:29:05.018 19:38:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:05.018 19:38:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:05.018 19:38:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:05.018 19:38:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:05.018 19:38:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:05.018 19:38:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:05.018 19:38:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:05.018 19:38:02 -- 
nvmf/common.sh@294 -- # net_devs=() 00:29:05.018 19:38:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:05.018 19:38:02 -- nvmf/common.sh@295 -- # e810=() 00:29:05.018 19:38:02 -- nvmf/common.sh@295 -- # local -ga e810 00:29:05.018 19:38:02 -- nvmf/common.sh@296 -- # x722=() 00:29:05.018 19:38:02 -- nvmf/common.sh@296 -- # local -ga x722 00:29:05.018 19:38:02 -- nvmf/common.sh@297 -- # mlx=() 00:29:05.018 19:38:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:05.018 19:38:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.018 19:38:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:05.018 19:38:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:05.018 19:38:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:05.018 19:38:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:05.018 19:38:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:05.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:05.018 19:38:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:05.018 19:38:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:05.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:05.018 19:38:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:05.018 19:38:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:05.018 19:38:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.018 19:38:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:05.018 19:38:02 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.018 19:38:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:05.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:05.018 19:38:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.018 19:38:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:05.018 19:38:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.018 19:38:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:05.018 19:38:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.018 19:38:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:05.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:05.018 19:38:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.018 19:38:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:05.018 19:38:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:05.018 19:38:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:05.018 19:38:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:05.018 19:38:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.018 19:38:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.018 19:38:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.018 19:38:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:05.018 19:38:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.018 19:38:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.018 19:38:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:05.018 19:38:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.018 19:38:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.018 19:38:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:05.018 19:38:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:05.018 19:38:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.018 19:38:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.018 19:38:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.018 19:38:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.018 19:38:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:05.019 19:38:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.019 19:38:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.019 19:38:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.019 19:38:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:05.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:29:05.019 00:29:05.019 --- 10.0.0.2 ping statistics --- 00:29:05.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.019 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:05.019 19:38:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:29:05.019 00:29:05.019 --- 10.0.0.1 ping statistics --- 00:29:05.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.019 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:05.019 19:38:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.019 19:38:02 -- nvmf/common.sh@410 -- # return 0 00:29:05.019 19:38:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:05.019 19:38:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.019 19:38:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:05.019 19:38:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:05.019 19:38:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.019 19:38:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:05.019 19:38:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:05.019 19:38:02 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:05.019 19:38:02 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:05.019 19:38:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:05.019 19:38:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:05.019 19:38:02 -- common/autotest_common.sh@10 -- # set +x 00:29:05.019 ************************************ 00:29:05.019 START TEST nvmf_digest_clean 00:29:05.019 ************************************ 00:29:05.019 19:38:02 -- common/autotest_common.sh@1114 -- # run_digest 00:29:05.019 19:38:02 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:05.019 19:38:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:05.019 19:38:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:05.019 19:38:02 -- common/autotest_common.sh@10 -- # set +x 00:29:05.019 19:38:02 -- nvmf/common.sh@469 -- # nvmfpid=1320642 00:29:05.019 19:38:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:05.019 19:38:02 -- nvmf/common.sh@470 -- # waitforlisten 1320642 00:29:05.019 19:38:02 -- common/autotest_common.sh@829 -- # '[' -z 1320642 ']' 00:29:05.019 19:38:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.019 19:38:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.019 19:38:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.019 19:38:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.019 19:38:02 -- common/autotest_common.sh@10 -- # set +x 00:29:05.019 [2024-11-17 19:38:02.918160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:05.019 [2024-11-17 19:38:02.918233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.019 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.019 [2024-11-17 19:38:02.981899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.019 [2024-11-17 19:38:03.064457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:05.019 [2024-11-17 19:38:03.064606] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.019 [2024-11-17 19:38:03.064623] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.019 [2024-11-17 19:38:03.064636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.019 [2024-11-17 19:38:03.064687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.019 19:38:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.019 19:38:03 -- common/autotest_common.sh@862 -- # return 0 00:29:05.019 19:38:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:05.019 19:38:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:05.019 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:29:05.019 19:38:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.019 19:38:03 -- host/digest.sh@120 -- # common_target_config 00:29:05.019 19:38:03 -- host/digest.sh@43 -- # rpc_cmd 00:29:05.019 19:38:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.019 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:29:05.019 null0 00:29:05.019 [2024-11-17 19:38:03.260636] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.278 [2024-11-17 19:38:03.284920] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.278 19:38:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.278 19:38:03 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:05.278 19:38:03 -- host/digest.sh@77 -- # local rw bs qd 00:29:05.278 19:38:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:05.278 19:38:03 -- host/digest.sh@80 -- # rw=randread 00:29:05.278 19:38:03 -- host/digest.sh@80 -- # bs=4096 00:29:05.278 19:38:03 -- host/digest.sh@80 -- # qd=128 00:29:05.278 19:38:03 -- host/digest.sh@82 -- # bperfpid=1320787 00:29:05.278 19:38:03 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:05.278 19:38:03 -- host/digest.sh@83 -- # waitforlisten 1320787 /var/tmp/bperf.sock 00:29:05.278 19:38:03 -- common/autotest_common.sh@829 -- # '[' -z 1320787 ']' 00:29:05.278 19:38:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.278 19:38:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.278 19:38:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:05.278 19:38:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.278 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:29:05.278 [2024-11-17 19:38:03.330925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:05.278 [2024-11-17 19:38:03.331001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320787 ] 00:29:05.278 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.278 [2024-11-17 19:38:03.391317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.278 [2024-11-17 19:38:03.480499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.581 19:38:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.581 19:38:03 -- common/autotest_common.sh@862 -- # return 0 00:29:05.581 19:38:03 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:05.581 19:38:03 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:05.581 19:38:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.862 19:38:03 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.862 19:38:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.120 nvme0n1 00:29:06.120 19:38:04 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:06.120 19:38:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.378 Running I/O for 2 seconds... 
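The run_bperf sequence traced here condenses to the sketch below; the paths are the ones this job uses, and the final kill/wait stands in for the killprocess call that the suite makes only after reading the accel stats:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (-z --wait-for-rpc) so the transport can be configured first.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
sleep 1        # the suite uses waitforlisten here; a short pause keeps the sketch simple

# Finish framework init, then attach the NVMe/TCP controller with data digest enabled.
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Drive the 2-second timed workload against the attached bdev (nvme0n1).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$bperfpid" && wait "$bperfpid"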
00:29:08.276 00:29:08.276 Latency(us) 00:29:08.276 [2024-11-17T18:38:06.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.276 [2024-11-17T18:38:06.543Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:08.276 nvme0n1 : 2.00 16618.62 64.92 0.00 0.00 7693.06 3495.25 15825.73 00:29:08.276 [2024-11-17T18:38:06.543Z] =================================================================================================================== 00:29:08.276 [2024-11-17T18:38:06.543Z] Total : 16618.62 64.92 0.00 0.00 7693.06 3495.25 15825.73 00:29:08.276 0 00:29:08.276 19:38:06 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:08.276 19:38:06 -- host/digest.sh@92 -- # get_accel_stats 00:29:08.276 19:38:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:08.276 19:38:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:08.276 19:38:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:08.276 | select(.opcode=="crc32c") 00:29:08.276 | "\(.module_name) \(.executed)"' 00:29:08.534 19:38:06 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:08.534 19:38:06 -- host/digest.sh@93 -- # exp_module=software 00:29:08.534 19:38:06 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:08.534 19:38:06 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:08.534 19:38:06 -- host/digest.sh@97 -- # killprocess 1320787 00:29:08.534 19:38:06 -- common/autotest_common.sh@936 -- # '[' -z 1320787 ']' 00:29:08.535 19:38:06 -- common/autotest_common.sh@940 -- # kill -0 1320787 00:29:08.535 19:38:06 -- common/autotest_common.sh@941 -- # uname 00:29:08.535 19:38:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:08.535 19:38:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1320787 00:29:08.535 19:38:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:08.535 19:38:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:08.535 19:38:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1320787' 00:29:08.535 killing process with pid 1320787 00:29:08.535 19:38:06 -- common/autotest_common.sh@955 -- # kill 1320787 00:29:08.535 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.535 00:29:08.535 Latency(us) 00:29:08.535 [2024-11-17T18:38:06.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.535 [2024-11-17T18:38:06.802Z] =================================================================================================================== 00:29:08.535 [2024-11-17T18:38:06.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.535 19:38:06 -- common/autotest_common.sh@960 -- # wait 1320787 00:29:08.793 19:38:06 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:08.793 19:38:06 -- host/digest.sh@77 -- # local rw bs qd 00:29:08.793 19:38:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.793 19:38:06 -- host/digest.sh@80 -- # rw=randread 00:29:08.793 19:38:06 -- host/digest.sh@80 -- # bs=131072 00:29:08.793 19:38:06 -- host/digest.sh@80 -- # qd=16 00:29:08.793 19:38:06 -- host/digest.sh@82 -- # bperfpid=1321208 00:29:08.793 19:38:06 -- host/digest.sh@83 -- # waitforlisten 1321208 /var/tmp/bperf.sock 00:29:08.793 19:38:06 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 
--wait-for-rpc 00:29:08.793 19:38:06 -- common/autotest_common.sh@829 -- # '[' -z 1321208 ']' 00:29:08.793 19:38:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.793 19:38:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.793 19:38:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.793 19:38:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.793 19:38:06 -- common/autotest_common.sh@10 -- # set +x 00:29:08.793 [2024-11-17 19:38:06.984063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:08.793 [2024-11-17 19:38:06.984146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321208 ] 00:29:08.793 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.793 Zero copy mechanism will not be used. 00:29:08.793 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.793 [2024-11-17 19:38:07.044755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.051 [2024-11-17 19:38:07.130828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.051 19:38:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.051 19:38:07 -- common/autotest_common.sh@862 -- # return 0 00:29:09.051 19:38:07 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:09.051 19:38:07 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:09.051 19:38:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.309 19:38:07 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.309 19:38:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.874 nvme0n1 00:29:09.874 19:38:07 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:09.874 19:38:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.874 Zero copy mechanism will not be used. 00:29:09.874 Running I/O for 2 seconds... 
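Each run is judged by the accel-stats check traced above; a minimal sketch of that check, using the same jq filter as the trace (exp_module is software here, exactly as in the trace, since no hardware accel module is in play):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
exp_module=software

# Pull per-opcode accel statistics from bdevperf and keep only the crc32c row.
read -r acc_module acc_executed < <(
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

(( acc_executed > 0 ))             || echo "FAIL: no crc32c operations were executed"
[[ $acc_module == "$exp_module" ]] || echo "FAIL: digests computed by $acc_module, expected $exp_module"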
00:29:11.770 00:29:11.770 Latency(us) 00:29:11.770 [2024-11-17T18:38:10.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.770 [2024-11-17T18:38:10.037Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:11.770 nvme0n1 : 2.00 6157.69 769.71 0.00 0.00 2594.48 1086.20 4538.97 00:29:11.770 [2024-11-17T18:38:10.037Z] =================================================================================================================== 00:29:11.770 [2024-11-17T18:38:10.037Z] Total : 6157.69 769.71 0.00 0.00 2594.48 1086.20 4538.97 00:29:11.770 0 00:29:11.770 19:38:10 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:11.770 19:38:10 -- host/digest.sh@92 -- # get_accel_stats 00:29:11.770 19:38:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:11.770 19:38:10 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:11.770 | select(.opcode=="crc32c") 00:29:11.770 | "\(.module_name) \(.executed)"' 00:29:11.770 19:38:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.028 19:38:10 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:12.028 19:38:10 -- host/digest.sh@93 -- # exp_module=software 00:29:12.028 19:38:10 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:12.028 19:38:10 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:12.028 19:38:10 -- host/digest.sh@97 -- # killprocess 1321208 00:29:12.028 19:38:10 -- common/autotest_common.sh@936 -- # '[' -z 1321208 ']' 00:29:12.028 19:38:10 -- common/autotest_common.sh@940 -- # kill -0 1321208 00:29:12.028 19:38:10 -- common/autotest_common.sh@941 -- # uname 00:29:12.287 19:38:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:12.287 19:38:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1321208 00:29:12.287 19:38:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:12.287 19:38:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:12.287 19:38:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1321208' 00:29:12.287 killing process with pid 1321208 00:29:12.287 19:38:10 -- common/autotest_common.sh@955 -- # kill 1321208 00:29:12.287 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.287 00:29:12.287 Latency(us) 00:29:12.287 [2024-11-17T18:38:10.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.287 [2024-11-17T18:38:10.554Z] =================================================================================================================== 00:29:12.287 [2024-11-17T18:38:10.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.287 19:38:10 -- common/autotest_common.sh@960 -- # wait 1321208 00:29:12.287 19:38:10 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:12.287 19:38:10 -- host/digest.sh@77 -- # local rw bs qd 00:29:12.287 19:38:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:12.287 19:38:10 -- host/digest.sh@80 -- # rw=randwrite 00:29:12.287 19:38:10 -- host/digest.sh@80 -- # bs=4096 00:29:12.287 19:38:10 -- host/digest.sh@80 -- # qd=128 00:29:12.287 19:38:10 -- host/digest.sh@82 -- # bperfpid=1321629 00:29:12.287 19:38:10 -- host/digest.sh@83 -- # waitforlisten 1321629 /var/tmp/bperf.sock 00:29:12.287 19:38:10 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:29:12.287 19:38:10 -- common/autotest_common.sh@829 -- # '[' -z 1321629 ']' 00:29:12.287 19:38:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.287 19:38:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.287 19:38:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.287 19:38:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.287 19:38:10 -- common/autotest_common.sh@10 -- # set +x 00:29:12.545 [2024-11-17 19:38:10.573228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:12.545 [2024-11-17 19:38:10.573309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321629 ] 00:29:12.545 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.545 [2024-11-17 19:38:10.633194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.545 [2024-11-17 19:38:10.721013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.545 19:38:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:12.545 19:38:10 -- common/autotest_common.sh@862 -- # return 0 00:29:12.545 19:38:10 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:12.545 19:38:10 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:12.545 19:38:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:13.112 19:38:11 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.112 19:38:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.370 nvme0n1 00:29:13.371 19:38:11 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:13.371 19:38:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.629 Running I/O for 2 seconds... 
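killprocess shows up after every bdevperf run above; stripped of its sudo and non-Linux special cases, the helper is essentially the function below. It is called with the pid recorded when the app was backgrounded, e.g. killprocess $bperfpid.

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                        # process must still be alive
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in this suite
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # collect exit status and shutdown output
}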
00:29:15.525 00:29:15.525 Latency(us) 00:29:15.525 [2024-11-17T18:38:13.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.525 [2024-11-17T18:38:13.792Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.525 nvme0n1 : 2.01 20347.28 79.48 0.00 0.00 6276.98 2378.71 16117.00 00:29:15.525 [2024-11-17T18:38:13.792Z] =================================================================================================================== 00:29:15.525 [2024-11-17T18:38:13.792Z] Total : 20347.28 79.48 0.00 0.00 6276.98 2378.71 16117.00 00:29:15.525 0 00:29:15.525 19:38:13 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:15.525 19:38:13 -- host/digest.sh@92 -- # get_accel_stats 00:29:15.525 19:38:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:15.525 19:38:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.525 19:38:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:15.525 | select(.opcode=="crc32c") 00:29:15.525 | "\(.module_name) \(.executed)"' 00:29:15.782 19:38:13 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:15.782 19:38:13 -- host/digest.sh@93 -- # exp_module=software 00:29:15.782 19:38:13 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:15.782 19:38:13 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.782 19:38:13 -- host/digest.sh@97 -- # killprocess 1321629 00:29:15.782 19:38:13 -- common/autotest_common.sh@936 -- # '[' -z 1321629 ']' 00:29:15.782 19:38:13 -- common/autotest_common.sh@940 -- # kill -0 1321629 00:29:15.782 19:38:13 -- common/autotest_common.sh@941 -- # uname 00:29:15.782 19:38:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.782 19:38:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1321629 00:29:15.782 19:38:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:15.782 19:38:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:15.782 19:38:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1321629' 00:29:15.782 killing process with pid 1321629 00:29:15.782 19:38:13 -- common/autotest_common.sh@955 -- # kill 1321629 00:29:15.782 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.782 00:29:15.782 Latency(us) 00:29:15.782 [2024-11-17T18:38:14.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.782 [2024-11-17T18:38:14.049Z] =================================================================================================================== 00:29:15.782 [2024-11-17T18:38:14.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.782 19:38:13 -- common/autotest_common.sh@960 -- # wait 1321629 00:29:16.040 19:38:14 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:16.040 19:38:14 -- host/digest.sh@77 -- # local rw bs qd 00:29:16.040 19:38:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:16.040 19:38:14 -- host/digest.sh@80 -- # rw=randwrite 00:29:16.040 19:38:14 -- host/digest.sh@80 -- # bs=131072 00:29:16.040 19:38:14 -- host/digest.sh@80 -- # qd=16 00:29:16.040 19:38:14 -- host/digest.sh@82 -- # bperfpid=1322049 00:29:16.040 19:38:14 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:16.040 19:38:14 -- host/digest.sh@83 -- # waitforlisten 1322049 
/var/tmp/bperf.sock 00:29:16.040 19:38:14 -- common/autotest_common.sh@829 -- # '[' -z 1322049 ']' 00:29:16.040 19:38:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.040 19:38:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:16.040 19:38:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:16.040 19:38:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:16.040 19:38:14 -- common/autotest_common.sh@10 -- # set +x 00:29:16.040 [2024-11-17 19:38:14.208473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:16.040 [2024-11-17 19:38:14.208555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322049 ] 00:29:16.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.040 Zero copy mechanism will not be used. 00:29:16.040 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.040 [2024-11-17 19:38:14.271089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.298 [2024-11-17 19:38:14.361720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.298 19:38:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.298 19:38:14 -- common/autotest_common.sh@862 -- # return 0 00:29:16.298 19:38:14 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:16.298 19:38:14 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:16.298 19:38:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:16.557 19:38:14 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.557 19:38:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.124 nvme0n1 00:29:17.124 19:38:15 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:17.124 19:38:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:17.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:17.124 Zero copy mechanism will not be used. 00:29:17.124 Running I/O for 2 seconds... 
00:29:19.652 00:29:19.652 Latency(us) 00:29:19.652 [2024-11-17T18:38:17.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.652 [2024-11-17T18:38:17.919Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:19.652 nvme0n1 : 2.00 6380.57 797.57 0.00 0.00 2500.46 1881.13 5582.70 00:29:19.652 [2024-11-17T18:38:17.919Z] =================================================================================================================== 00:29:19.652 [2024-11-17T18:38:17.919Z] Total : 6380.57 797.57 0.00 0.00 2500.46 1881.13 5582.70 00:29:19.652 0 00:29:19.652 19:38:17 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:19.652 19:38:17 -- host/digest.sh@92 -- # get_accel_stats 00:29:19.652 19:38:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:19.652 19:38:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:19.652 19:38:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:19.652 | select(.opcode=="crc32c") 00:29:19.652 | "\(.module_name) \(.executed)"' 00:29:19.652 19:38:17 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:19.652 19:38:17 -- host/digest.sh@93 -- # exp_module=software 00:29:19.652 19:38:17 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:19.652 19:38:17 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:19.652 19:38:17 -- host/digest.sh@97 -- # killprocess 1322049 00:29:19.652 19:38:17 -- common/autotest_common.sh@936 -- # '[' -z 1322049 ']' 00:29:19.652 19:38:17 -- common/autotest_common.sh@940 -- # kill -0 1322049 00:29:19.652 19:38:17 -- common/autotest_common.sh@941 -- # uname 00:29:19.652 19:38:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:19.652 19:38:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1322049 00:29:19.652 19:38:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:19.652 19:38:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:19.652 19:38:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1322049' 00:29:19.652 killing process with pid 1322049 00:29:19.652 19:38:17 -- common/autotest_common.sh@955 -- # kill 1322049 00:29:19.652 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.652 00:29:19.652 Latency(us) 00:29:19.652 [2024-11-17T18:38:17.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.652 [2024-11-17T18:38:17.919Z] =================================================================================================================== 00:29:19.652 [2024-11-17T18:38:17.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.652 19:38:17 -- common/autotest_common.sh@960 -- # wait 1322049 00:29:19.652 19:38:17 -- host/digest.sh@126 -- # killprocess 1320642 00:29:19.652 19:38:17 -- common/autotest_common.sh@936 -- # '[' -z 1320642 ']' 00:29:19.652 19:38:17 -- common/autotest_common.sh@940 -- # kill -0 1320642 00:29:19.652 19:38:17 -- common/autotest_common.sh@941 -- # uname 00:29:19.652 19:38:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:19.652 19:38:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1320642 00:29:19.652 19:38:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:19.652 19:38:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:19.652 19:38:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1320642' 
00:29:19.652 killing process with pid 1320642 00:29:19.652 19:38:17 -- common/autotest_common.sh@955 -- # kill 1320642 00:29:19.652 19:38:17 -- common/autotest_common.sh@960 -- # wait 1320642 00:29:19.911 00:29:19.911 real 0m15.254s 00:29:19.911 user 0m30.297s 00:29:19.911 sys 0m4.341s 00:29:19.911 19:38:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:19.911 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:19.911 ************************************ 00:29:19.911 END TEST nvmf_digest_clean 00:29:19.911 ************************************ 00:29:19.911 19:38:18 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:19.911 19:38:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:19.911 19:38:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:19.911 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:19.911 ************************************ 00:29:19.911 START TEST nvmf_digest_error 00:29:19.911 ************************************ 00:29:19.911 19:38:18 -- common/autotest_common.sh@1114 -- # run_digest_error 00:29:19.911 19:38:18 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:19.911 19:38:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:19.911 19:38:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.911 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:19.911 19:38:18 -- nvmf/common.sh@469 -- # nvmfpid=1322615 00:29:19.911 19:38:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:19.911 19:38:18 -- nvmf/common.sh@470 -- # waitforlisten 1322615 00:29:19.911 19:38:18 -- common/autotest_common.sh@829 -- # '[' -z 1322615 ']' 00:29:19.911 19:38:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.911 19:38:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.911 19:38:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.911 19:38:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.911 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.169 [2024-11-17 19:38:18.200405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:20.169 [2024-11-17 19:38:18.200496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.169 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.169 [2024-11-17 19:38:18.267386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.169 [2024-11-17 19:38:18.353755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:20.169 [2024-11-17 19:38:18.353922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.169 [2024-11-17 19:38:18.353942] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.169 [2024-11-17 19:38:18.353967] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
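For the nvmf_digest_error half of the suite the target is restarted inside the same namespace, paused so that accel_assign_opc can be issued before init; the nvmfappstart step traced above amounts to the sketch below, where the polling loop is a simplified stand-in for waitforlisten:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Wait until the target answers on its default RPC socket (/var/tmp/spdk.sock).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done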
00:29:20.169 [2024-11-17 19:38:18.353999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.169 19:38:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.169 19:38:18 -- common/autotest_common.sh@862 -- # return 0 00:29:20.169 19:38:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:20.169 19:38:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:20.169 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 19:38:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.428 19:38:18 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:20.428 19:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.428 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 [2024-11-17 19:38:18.446627] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:20.428 19:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.428 19:38:18 -- host/digest.sh@104 -- # common_target_config 00:29:20.428 19:38:18 -- host/digest.sh@43 -- # rpc_cmd 00:29:20.428 19:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.428 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 null0 00:29:20.428 [2024-11-17 19:38:18.566392] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.428 [2024-11-17 19:38:18.590622] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.428 19:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.428 19:38:18 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:20.428 19:38:18 -- host/digest.sh@54 -- # local rw bs qd 00:29:20.428 19:38:18 -- host/digest.sh@56 -- # rw=randread 00:29:20.428 19:38:18 -- host/digest.sh@56 -- # bs=4096 00:29:20.428 19:38:18 -- host/digest.sh@56 -- # qd=128 00:29:20.428 19:38:18 -- host/digest.sh@58 -- # bperfpid=1322640 00:29:20.428 19:38:18 -- host/digest.sh@60 -- # waitforlisten 1322640 /var/tmp/bperf.sock 00:29:20.428 19:38:18 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:20.428 19:38:18 -- common/autotest_common.sh@829 -- # '[' -z 1322640 ']' 00:29:20.428 19:38:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.428 19:38:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.428 19:38:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.428 19:38:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.428 19:38:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 [2024-11-17 19:38:18.635452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:20.428 [2024-11-17 19:38:18.635524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322640 ] 00:29:20.428 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.686 [2024-11-17 19:38:18.697476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.686 [2024-11-17 19:38:18.787774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.617 19:38:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.617 19:38:19 -- common/autotest_common.sh@862 -- # return 0 00:29:21.617 19:38:19 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.617 19:38:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.617 19:38:19 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:21.617 19:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.617 19:38:19 -- common/autotest_common.sh@10 -- # set +x 00:29:21.617 19:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.617 19:38:19 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.617 19:38:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.183 nvme0n1 00:29:22.183 19:38:20 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:22.183 19:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.183 19:38:20 -- common/autotest_common.sh@10 -- # set +x 00:29:22.183 19:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.183 19:38:20 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.183 19:38:20 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.183 Running I/O for 2 seconds... 
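The stream of data digest errors that follows is deliberate: the target's crc32c opcode has been routed to the accel error module and told to corrupt results. Condensed from the trace (rpc_tgt and rpc_perf are illustrative wrappers for the target and bdevperf RPC sockets, standing in for rpc_cmd and bperf_rpc):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_tgt()  { "$SPDK/scripts/rpc.py" "$@"; }                        # target, default /var/tmp/spdk.sock
rpc_perf() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; } # bdevperf

rpc_tgt  accel_assign_opc -o crc32c -m error          # issued earlier, while the target was still paused
rpc_perf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_tgt  accel_error_inject_error -o crc32c -t disable             # start with injection off
rpc_perf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_tgt  accel_error_inject_error -o crc32c -t corrupt -i 256      # then corrupt crc32c results (-i 256 as traced)
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests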
00:29:22.183 [2024-11-17 19:38:20.436767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.183 [2024-11-17 19:38:20.436825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.183 [2024-11-17 19:38:20.436846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.454010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.454059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.454079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.469506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.469541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.469560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.487957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.487986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.488018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.503721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.503752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.503769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.515427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.515462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.515481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.532479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.532531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.549307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.549341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.549360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.566006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.566039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.566058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.582927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.582974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.582993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.599961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.600003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.600023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.617021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.617055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.617074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.633993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.634026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.634045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.651030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.651077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.651096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.668263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.668300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.668319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.442 [2024-11-17 19:38:20.685561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.442 [2024-11-17 19:38:20.685594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.442 [2024-11-17 19:38:20.685613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.443 [2024-11-17 19:38:20.702830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.443 [2024-11-17 19:38:20.702858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.443 [2024-11-17 19:38:20.702893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.720115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.720150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.720170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.737400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.737435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.737454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.754615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.754649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.754668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.771689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.771735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.771750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.788538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.788572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.788591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.805669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.805725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.805741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.822795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.822823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.822854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.839436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.839469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.839488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.856559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.856600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.856620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.873594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.873630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.873649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.890820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.890847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.701 [2024-11-17 19:38:20.890878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.907775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.907804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.907835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.924904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.701 [2024-11-17 19:38:20.924933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.701 [2024-11-17 19:38:20.924964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.701 [2024-11-17 19:38:20.941510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.702 [2024-11-17 19:38:20.941545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.702 [2024-11-17 19:38:20.941564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.702 [2024-11-17 19:38:20.958433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.702 [2024-11-17 19:38:20.958467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.702 [2024-11-17 19:38:20.958486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:20.975406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:20.975441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:20.975460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:20.992444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:20.992480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:20.992499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.009334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.009368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21297 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.009387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.025531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.025565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.025583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.042847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.042875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.042905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.059607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.059642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.059660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.076821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.076849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.076878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.094054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.094087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.094105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.111129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.111163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.111182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.128120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.128154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:7830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.128173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.145024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.960 [2024-11-17 19:38:21.145057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.960 [2024-11-17 19:38:21.145082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.960 [2024-11-17 19:38:21.162087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.961 [2024-11-17 19:38:21.162121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.961 [2024-11-17 19:38:21.162139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.961 [2024-11-17 19:38:21.179267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.961 [2024-11-17 19:38:21.179301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.961 [2024-11-17 19:38:21.179319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.961 [2024-11-17 19:38:21.196373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.961 [2024-11-17 19:38:21.196406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.961 [2024-11-17 19:38:21.196424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.961 [2024-11-17 19:38:21.213429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:22.961 [2024-11-17 19:38:21.213462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.961 [2024-11-17 19:38:21.213480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.230748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.230777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.230808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.247773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.247801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.247831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.264787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.264814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.264845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.281840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.281867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.281899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.298972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.299027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.299047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.316123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.316157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.316176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.333198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.333241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.333260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.350414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.350449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.350468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.367640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 
00:29:23.219 [2024-11-17 19:38:21.367692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.367727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.384501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.384543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.384562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.402044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.219 [2024-11-17 19:38:21.402078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.219 [2024-11-17 19:38:21.402103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.219 [2024-11-17 19:38:21.419071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.220 [2024-11-17 19:38:21.419118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.220 [2024-11-17 19:38:21.419136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.220 [2024-11-17 19:38:21.436031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.220 [2024-11-17 19:38:21.436065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.220 [2024-11-17 19:38:21.436083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.220 [2024-11-17 19:38:21.453227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.220 [2024-11-17 19:38:21.453262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.220 [2024-11-17 19:38:21.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.220 [2024-11-17 19:38:21.470294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.220 [2024-11-17 19:38:21.470328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.220 [2024-11-17 19:38:21.470346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.487413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.487448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.487467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.504454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.504488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.504506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.521564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.521598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.521617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.538608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.538642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.538660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.555367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.555400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.555418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.572302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.572335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.572354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.589350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.589384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.589409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.606365] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.606399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.606417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.623282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.623316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.623335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.640425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.640459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.640477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.657390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.657425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.657443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.674292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.674326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.674344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.691387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.691420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.691439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.708419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.708452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.708470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
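
Each repetition in the block above is one injected crc32c failure during the randread pass that is torn down a little further on: nvme_tcp flags a data digest error on the qpair and the READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), with dnr:0, i.e. retryable. These completions are what the harness's get_transient_errcount check tallies later in this log by querying bdevperf over its RPC socket. A minimal sketch of that query, reusing the rpc.py path, socket, bdev name and jq filter that appear further down in the trace:

    #!/usr/bin/env bash
    # Sketch: read back the transient-transport-error counter for nvme0n1.
    # Assumes bdevperf is still listening on the bperf socket and was configured
    # with bdev_nvme_set_options --nvme-error-stat, as this run was.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    errs=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')

    # The test only asserts that at least one such error was recorded,
    # e.g. the (( 118 > 0 )) check seen later in this log.
    (( errs > 0 )) && echo "observed $errs transient transport errors"
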
00:29:23.478 [2024-11-17 19:38:21.725497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.478 [2024-11-17 19:38:21.725531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.478 [2024-11-17 19:38:21.725549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.478 [2024-11-17 19:38:21.742793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.479 [2024-11-17 19:38:21.742830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.479 [2024-11-17 19:38:21.742848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.760128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.760162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.760181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.777192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.777225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.777244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.792658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.792716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.792734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.804852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.804879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.804895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.821076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.821109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.821127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.838485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.838519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.838537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.855351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.855385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.855404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.872835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.872863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.872893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.889879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.889907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.889937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.906744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.906771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.906802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.924048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.924093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.924111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.941077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.941110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.941128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.958300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.958334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.958352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.975572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.975607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.737 [2024-11-17 19:38:21.975625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.737 [2024-11-17 19:38:21.992734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.737 [2024-11-17 19:38:21.992762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.738 [2024-11-17 19:38:21.992792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.010076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.010110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.010128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.027751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.027783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.027814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.043278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.043312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.043330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.060125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.060159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.060177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.076913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.076943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.076960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.094157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.094191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.094209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.112579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.112613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.112632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.129619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.129653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.129671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.146999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.147025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.147059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.163861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.163889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.163919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.180944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.180986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 
[2024-11-17 19:38:22.181006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.198100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.198134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.198152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.211932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.211962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.211978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.230248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.996 [2024-11-17 19:38:22.230282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.996 [2024-11-17 19:38:22.230301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.996 [2024-11-17 19:38:22.245642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.997 [2024-11-17 19:38:22.245685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.997 [2024-11-17 19:38:22.245707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.997 [2024-11-17 19:38:22.259424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:23.997 [2024-11-17 19:38:22.259458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.997 [2024-11-17 19:38:22.259477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.271360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.271394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.271413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.295691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.295737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22405 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.295752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.311232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.311266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.311291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.322576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.322609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.322628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.339447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.339481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.339500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.356029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.356062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.356081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.373261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.373295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.373314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.390087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.390120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.255 [2024-11-17 19:38:22.390139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.255 [2024-11-17 19:38:22.406989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c6400) 00:29:24.255 [2024-11-17 19:38:22.407035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.255 [2024-11-17 19:38:22.407053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.255
00:29:24.255 Latency(us)
00:29:24.255 [2024-11-17T18:38:22.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.255 [2024-11-17T18:38:22.522Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:24.255 nvme0n1 : 2.01 15066.80 58.85 0.00 0.00 8488.02 3786.52 31457.28
00:29:24.255 [2024-11-17T18:38:22.522Z] ===================================================================================================================
00:29:24.255 [2024-11-17T18:38:22.522Z] Total : 15066.80 58.85 0.00 0.00 8488.02 3786.52 31457.28
00:29:24.255 0
00:29:24.255 19:38:22 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:24.255 19:38:22 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:24.255 19:38:22 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:24.255 | .driver_specific
00:29:24.255 | .nvme_error
00:29:24.255 | .status_code
00:29:24.255 | .command_transient_transport_error'
00:29:24.255 19:38:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:24.514 19:38:22 -- host/digest.sh@71 -- # (( 118 > 0 ))
00:29:24.514 19:38:22 -- host/digest.sh@73 -- # killprocess 1322640
00:29:24.514 19:38:22 -- common/autotest_common.sh@936 -- # '[' -z 1322640 ']'
00:29:24.514 19:38:22 -- common/autotest_common.sh@940 -- # kill -0 1322640
00:29:24.514 19:38:22 -- common/autotest_common.sh@941 -- # uname
00:29:24.514 19:38:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:24.514 19:38:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1322640
00:29:24.514 19:38:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:24.514 19:38:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:24.514 19:38:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1322640'
00:29:24.514 killing process with pid 1322640
00:29:24.514 19:38:22 -- common/autotest_common.sh@955 -- # kill 1322640
00:29:24.514 Received shutdown signal, test time was about 2.000000 seconds
00:29:24.514
00:29:24.514 Latency(us)
00:29:24.514 [2024-11-17T18:38:22.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.514 [2024-11-17T18:38:22.781Z] ===================================================================================================================
00:29:24.514 [2024-11-17T18:38:22.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:24.514 19:38:22 -- common/autotest_common.sh@960 -- # wait 1322640
00:29:24.772 19:38:22 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:29:24.772 19:38:22 -- host/digest.sh@54 -- # local rw bs qd
00:29:24.772 19:38:22 -- host/digest.sh@56 -- # rw=randread
00:29:24.772 19:38:22 -- host/digest.sh@56 -- # bs=131072
00:29:24.772 19:38:22 -- host/digest.sh@56 -- # qd=16
00:29:24.772 19:38:22 -- host/digest.sh@58 -- # bperfpid=1323195
00:29:24.772 19:38:22 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:24.772 19:38:22 -- host/digest.sh@60 -- # waitforlisten 1323195 /var/tmp/bperf.sock
00:29:24.772 19:38:22 -- common/autotest_common.sh@829 -- # '[' -z 1323195 ']'
00:29:24.772 19:38:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:24.772 19:38:22 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:24.772 19:38:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:24.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:24.772 19:38:22 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:24.772 19:38:22 -- common/autotest_common.sh@10 -- # set +x
00:29:24.772 [2024-11-17 19:38:22.992486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... [2024-11-17 19:38:22.992568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323195 ]
00:29:24.772 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:24.772 Zero copy mechanism will not be used.
00:29:24.772 EAL: No free 2048 kB hugepages reported on node 1
00:29:25.030 [2024-11-17 19:38:23.058227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.030 [2024-11-17 19:38:23.141594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:25.963 19:38:23 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:25.963 19:38:23 -- common/autotest_common.sh@862 -- # return 0
00:29:25.963 19:38:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.963 19:38:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:25.963 19:38:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:25.963 19:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.963 19:38:24 -- common/autotest_common.sh@10 -- # set +x
00:29:25.963 19:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.963 19:38:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:25.963 19:38:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:26.530 nvme0n1
00:29:26.530 19:38:24 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:26.530 19:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.530 19:38:24 -- common/autotest_common.sh@10 -- # set +x
00:29:26.530 19:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.530 19:38:24 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:26.530 19:38:24 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:26.530 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:26.530 Zero copy mechanism will not be used.
00:29:26.530 Running I/O for 2 seconds...
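
Before "Running I/O for 2 seconds..." above, the xtrace shows how run_bperf_err stands this pass up: the previous bdevperf is killed, a new one is launched for 131072-byte random reads at queue depth 16, NVMe error statistics and unlimited retries are enabled, the controller is attached over TCP with data digest (--ddgst), crc32c corruption is re-armed, and perform_tests starts the I/O. A condensed sketch of that sequence using the binaries and socket named in the trace; the NVMe-oF target at 10.0.0.2:4420 serving nqn.2016-06.io.spdk:cnode1 is assumed to be up already, and bperf_rpc below is a stand-in for the harness helper of the same name, not the harness's exact code:

    #!/usr/bin/env bash
    # Condensed sketch of the setup traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    bperf_rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

    # 131072-byte random reads, queue depth 16, 2 s run; -z makes bdevperf wait for an RPC start.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    sleep 1   # the harness uses waitforlisten on $SOCK rather than a fixed sleep

    # Keep per-command NVMe error statistics and retry failed commands indefinitely.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with TCP data digest enabled so received data is crc32c-checked.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c corruption for the next 32 accel operations. The trace issues this
    # through rpc_cmd, whose socket is not visible in this excerpt; routing it through
    # the bperf socket here is an assumption of this sketch.
    bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the workload; reads whose computed digest is corrupted complete with (00/22).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
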
00:29:26.788 [2024-11-17 19:38:24.799672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.788 [2024-11-17 19:38:24.799755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.788 [2024-11-17 19:38:24.799777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.788 [2024-11-17 19:38:24.805424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.788 [2024-11-17 19:38:24.805456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.788 [2024-11-17 19:38:24.805475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.788 [2024-11-17 19:38:24.812111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.788 [2024-11-17 19:38:24.812146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.812167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.818811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.818842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.818860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.825784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.825816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.825835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.833196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.833230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.833249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.840837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.840882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.840898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.848756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.848787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.848805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.855320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.855354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.855373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.861656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.861700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.861721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.867577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.867623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.867640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.872015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.872065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.872085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.876049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.876079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.876096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.879845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.879874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.879890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.883425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.883453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.883470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.888178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.888212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.888237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.894349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.894383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.894402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.901045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.901088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.901104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.908824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.908855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.908872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.916025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.916071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.916087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.922122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.922156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:26.789 [2024-11-17 19:38:24.922176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.927430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.927464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.927483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.932806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.932852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.932869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.937508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.937541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.937560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.941968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.942007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.942026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.946731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.946763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.946781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.951401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.951452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.956775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.956811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.956845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.962125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.789 [2024-11-17 19:38:24.962156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.789 [2024-11-17 19:38:24.962173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.789 [2024-11-17 19:38:24.968042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.968077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.968097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:24.973623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.973657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.973686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:24.978581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.978615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.978634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:24.983402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.983436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.983462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:24.988062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.988093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.988110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:24.992226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.992271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.992290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:24.996858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:24.996889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:24.996906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.001756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.001785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.001819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.006026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.006060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.006079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.010260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.010289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.010306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.013887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.013916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.013933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.017682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.017710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.017726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.021723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 
[2024-11-17 19:38:25.021757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.021774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.026355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.026383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.026399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.031178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.031208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.031224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.036663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.036705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.041416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.041448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.041467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.046203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.046232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.046249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:26.790 [2024-11-17 19:38:25.050941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:26.790 [2024-11-17 19:38:25.050971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.790 [2024-11-17 19:38:25.050987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.056011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.056045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.056076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.061189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.061223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.061242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.066165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.066195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.066227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.070426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.070457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.070474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.074610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.074640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.074657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.079084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.079114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.079131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.083907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.083951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.083968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.089544] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.089574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.089591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.095180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.095211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.095229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.100588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.100619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.106417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.106451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.106477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.112184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.112215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.112247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.117941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.117972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.117990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.123631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.123684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.123703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
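The repeated "data digest error" messages above are reported when the CRC32C data digest carried in a data-bearing NVMe/TCP PDU does not match the digest computed over the received payload; each affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, so the host is allowed to retry it. As a rough, hedged illustration only (this is not SPDK code, and crc32c_sw()/verify_data_digest() are made-up names), such a digest check amounts to the following:

/* Plain software CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * shown only to illustrate what an NVMe/TCP data digest check computes. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                  /* standard CRC32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            if (crc & 1u)
                crc = (crc >> 1) ^ 0x82F63B78u;  /* reflected Castagnoli polynomial */
            else
                crc >>= 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                    /* final XOR */
}

/* A mismatch between the recomputed digest and the digest received with the
 * PDU is what the log above calls a "data digest error"; the transport then
 * fails the command with a transient (retryable, dnr:0) status. */
static bool verify_data_digest(const uint8_t *payload, size_t len,
                               uint32_t received_digest)
{
    return crc32c_sw(payload, len) == received_digest;
}

int main(void)
{
    const uint8_t payload[] = "example payload";
    uint32_t good = crc32c_sw(payload, sizeof(payload) - 1);

    printf("matching digest ok:  %d\n", verify_data_digest(payload, sizeof(payload) - 1, good));
    printf("corrupted digest ok: %d\n", verify_data_digest(payload, sizeof(payload) - 1, good ^ 1u));
    return 0;
}

In a real transport the digest would typically be computed with hardware or accelerated CRC32C rather than this bitwise loop; the comparison-and-retryable-failure behaviour is the part the log entries above are exercising.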
00:29:27.049 [2024-11-17 19:38:25.128639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.128670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.049 [2024-11-17 19:38:25.128697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.049 [2024-11-17 19:38:25.132559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.049 [2024-11-17 19:38:25.132588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.132605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.136163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.136193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.136209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.139653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.139689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.139707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.143662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.143701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.143719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.148537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.148586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.148603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.154908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.154954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.160650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.160687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.160707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.166704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.166734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.166765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.172433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.172463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.172480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.175530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.175563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.175581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.180384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.180417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.180436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.185724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.185753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.185784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.190465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.190508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.190524] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.195416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.195449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.195468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.199995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.200028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.200046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.204543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.204571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.204601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.208999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.209032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.209050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.213599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.213630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.213648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.217982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.218014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.218032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.223349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.223381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 
19:38:25.223399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.227304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.227337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.227355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.231963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.231992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.232014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.237027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.237059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.237077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.241651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.241693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.241714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.246153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.246180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.246213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.251069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.251102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.251120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.050 [2024-11-17 19:38:25.255998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.050 [2024-11-17 19:38:25.256032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:27.050 [2024-11-17 19:38:25.256051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.261358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.261386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.261416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.267412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.267446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.267465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.272597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.272629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.272647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.277145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.277177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.277196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.281665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.281704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.281723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.286183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.286217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.286235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.290553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.290584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.290602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.294893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.294921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.294953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.299326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.299358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.299376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.304696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.304742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.304758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.308744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.308773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.308804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.051 [2024-11-17 19:38:25.313517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.051 [2024-11-17 19:38:25.313550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.051 [2024-11-17 19:38:25.313574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.318074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.318101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.318132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.323104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.323137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.323155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.327417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.327449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.327467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.332234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.332266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.332284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.336688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.336719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.336736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.341093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.341125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.341143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.345589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.345620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.345638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.350301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.350333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.350352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.355605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 
00:29:27.309 [2024-11-17 19:38:25.355644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.355664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.361270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.361303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.361322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.365657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.365693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.365711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.370471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.370500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.370517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.374607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.374637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.374654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.379442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.379476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.379494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.383993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.384023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.309 [2024-11-17 19:38:25.384040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.309 [2024-11-17 19:38:25.388148] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.309 [2024-11-17 19:38:25.388177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.388194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.392766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.392796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.392813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.397936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.397968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.397985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.403798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.403830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.403847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.409171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.409207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.409226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.414949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.414996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.415016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.420541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.420574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.420593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.425217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.425250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.425269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.430081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.430115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.430133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.435874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.435906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.435923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.441496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.441544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.441569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.447856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.447901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.447917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.455104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.455134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.455150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.460846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.460877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.460894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.466388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.466422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.466440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.472116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.472149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.472168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.477900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.477930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.477947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.483391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.483424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.483443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.489127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.489174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.489193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.494882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.494918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.494950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.499535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.499566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.499583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.504580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.504610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.504629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.509808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.509839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.509856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.515080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.515129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.515147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.520716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.520746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.520764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.526382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.526415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.526434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.532063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.532097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.310 [2024-11-17 19:38:25.532115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.310 [2024-11-17 19:38:25.537753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.310 [2024-11-17 19:38:25.537798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.311 [2024-11-17 19:38:25.537820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.311 [2024-11-17 19:38:25.543532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.311 [2024-11-17 19:38:25.543567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-11-17 19:38:25.543587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.311 [2024-11-17 19:38:25.549066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.311 [2024-11-17 19:38:25.549115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-11-17 19:38:25.549134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.311 [2024-11-17 19:38:25.554799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.311 [2024-11-17 19:38:25.554830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-11-17 19:38:25.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.311 [2024-11-17 19:38:25.560935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.311 [2024-11-17 19:38:25.560988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-11-17 19:38:25.561008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.311 [2024-11-17 19:38:25.566941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.311 [2024-11-17 19:38:25.566990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-11-17 19:38:25.567009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.311 [2024-11-17 19:38:25.572769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.311 [2024-11-17 19:38:25.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.311 [2024-11-17 19:38:25.572817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.569 [2024-11-17 19:38:25.578563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.569 [2024-11-17 19:38:25.578597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.569 [2024-11-17 19:38:25.578616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.569 [2024-11-17 19:38:25.584122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.569 [2024-11-17 19:38:25.584157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.569 [2024-11-17 19:38:25.584177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.569 [2024-11-17 19:38:25.590045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.569 [2024-11-17 19:38:25.590086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.569 [2024-11-17 19:38:25.590106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.569 [2024-11-17 19:38:25.594605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.569 [2024-11-17 19:38:25.594638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.569 [2024-11-17 19:38:25.594658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.569 [2024-11-17 19:38:25.599156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.569 [2024-11-17 19:38:25.599190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.569 [2024-11-17 19:38:25.599209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.604736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.604766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.604784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.610225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.610273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.610294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.615250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.615280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.615297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.621594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.621628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.621647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.628004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.628035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.628051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.633325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.633359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.633378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.638458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.638492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.638511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.644221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.644255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.644274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.649792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.649824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.649842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.654582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 
00:29:27.570 [2024-11-17 19:38:25.654615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.659401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.659435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.659454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.663896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.663925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.663943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.668416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.668445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.668461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.673096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.673125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.673142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.678263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.678292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.678315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.682826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.682855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.682873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.687550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.687582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.687600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.692102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.692132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.692148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.696999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.697028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.697045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.702145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.702175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.702192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.706871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.706901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.570 [2024-11-17 19:38:25.706917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.570 [2024-11-17 19:38:25.711878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.570 [2024-11-17 19:38:25.711908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.711925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.718447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.718477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.718494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.724201] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.724254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.724274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.729765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.729796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.734924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.734953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.734971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.739793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.739823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.739839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.744876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.744906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.744922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.750155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.750189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.750208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.755470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.755503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.755521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:27.571 [2024-11-17 19:38:25.760901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.760932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.760949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.766280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.766313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.766332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.770884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.770913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.770930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.775778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.775822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.775838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.780768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.780798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.780815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.785430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.785462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.785481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.790224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.790258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.790278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.795293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.795326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.795345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.800090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.800122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.800141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.804607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.804639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.804657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.809035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.809063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.809086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.813622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.813656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.813683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.818532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.818565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.818584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.823488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.823521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.823540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.829143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.829178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.829197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.571 [2024-11-17 19:38:25.833811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.571 [2024-11-17 19:38:25.833842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.571 [2024-11-17 19:38:25.833860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.837372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.839 [2024-11-17 19:38:25.837405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.839 [2024-11-17 19:38:25.837424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.843460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.839 [2024-11-17 19:38:25.843496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.839 [2024-11-17 19:38:25.843515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.849626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.839 [2024-11-17 19:38:25.849660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.839 [2024-11-17 19:38:25.849690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.854335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.839 [2024-11-17 19:38:25.854369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.839 [2024-11-17 19:38:25.854387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.859661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.839 [2024-11-17 19:38:25.859704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.839 [2024-11-17 19:38:25.859724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.864818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.839 [2024-11-17 19:38:25.864848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.839 [2024-11-17 19:38:25.864865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.839 [2024-11-17 19:38:25.871075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.871111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.871130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.876251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.876284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.876303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.883491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.883526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.883545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.888932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.888961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.896787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.896817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.896849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.904920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.904950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.904989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.911799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.911844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.911861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.919097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.919131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.919150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.926437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.926471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.926490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.933782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.933813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.933831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.941150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.941178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.941209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.948837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.948880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.948896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.956420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.956466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.956483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.963924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.963954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.963987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.971574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.971615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.971635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.978890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.978922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.978940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.986459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.986493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.986512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:25.993781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:25.993811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:25.993828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.000998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.001032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.001050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.007626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 
00:29:27.840 [2024-11-17 19:38:26.007660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.007688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.013278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.013309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.013325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.018537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.018567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.018584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.023178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.023211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.023230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.027874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.027918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.027934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.032633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.032666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.032695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.037205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.037248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.037263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.042088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.042121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.042139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.046780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.046823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.051538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.051566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.051599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.056233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.056265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.056283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.060952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.060985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.061003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.066002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.066037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.066086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.840 [2024-11-17 19:38:26.071909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.840 [2024-11-17 19:38:26.071939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.840 [2024-11-17 19:38:26.071973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.841 [2024-11-17 19:38:26.076882] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.841 [2024-11-17 19:38:26.076928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.841 [2024-11-17 19:38:26.076944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.841 [2024-11-17 19:38:26.082212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.841 [2024-11-17 19:38:26.082246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.841 [2024-11-17 19:38:26.082265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.841 [2024-11-17 19:38:26.086694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.841 [2024-11-17 19:38:26.086725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.841 [2024-11-17 19:38:26.086741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.841 [2024-11-17 19:38:26.091624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.841 [2024-11-17 19:38:26.091657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.841 [2024-11-17 19:38:26.091684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.841 [2024-11-17 19:38:26.096058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.841 [2024-11-17 19:38:26.096089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.841 [2024-11-17 19:38:26.096106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.841 [2024-11-17 19:38:26.100835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:27.841 [2024-11-17 19:38:26.100865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.841 [2024-11-17 19:38:26.100881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.106456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.106490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.106509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:28.103 [2024-11-17 19:38:26.112228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.112271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.112291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.118118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.118152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.118172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.123699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.123747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.123764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.129155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.129190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.129209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.135081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.135112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.135129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.140969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.141000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.141017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.146083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.146114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.146130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.151941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.151990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.152010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.158059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.158089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.158112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.164726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.164772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.164790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.170460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.170494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.170513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.175951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.175982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.175998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.181456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.181490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.181510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.186083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.186116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.186135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.191365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.191399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.191417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.196118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.196151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.196169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.201351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.201385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.201404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.206719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.206759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.206778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.212252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.103 [2024-11-17 19:38:26.212286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.103 [2024-11-17 19:38:26.212305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.103 [2024-11-17 19:38:26.217983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.218017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.218037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.223584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.223618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.104 [2024-11-17 19:38:26.223637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.228871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.228901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.228918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.234338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.234372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.234392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.240080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.240114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.240133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.245875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.245921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.245938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.251408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.251442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.251461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.256881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.256925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.256942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.261452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.261483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.261502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.266565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.266598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.266617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.272341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.272374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.272393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.277884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.277914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.277930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.282961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.282991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.283008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.288209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.288243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.288262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.293868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.293899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.293918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.299543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.299577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.299602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.305285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.305320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.305339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.310972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.311018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.311038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.316277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.316312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.316346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.321923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.321955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.321973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.328418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.328452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.328471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.335441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.335475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.335494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.341075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 
00:29:28.104 [2024-11-17 19:38:26.341110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.341129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.346277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.346307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.346325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.350659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.350702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.350721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.355268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.355298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.355316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.104 [2024-11-17 19:38:26.359911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.104 [2024-11-17 19:38:26.359942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.104 [2024-11-17 19:38:26.359958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.105 [2024-11-17 19:38:26.364436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.105 [2024-11-17 19:38:26.364466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.105 [2024-11-17 19:38:26.364484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.105 [2024-11-17 19:38:26.368561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.105 [2024-11-17 19:38:26.368591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.105 [2024-11-17 19:38:26.368608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.363 [2024-11-17 19:38:26.373482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.363 [2024-11-17 19:38:26.373517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.363 [2024-11-17 19:38:26.373536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.363 [2024-11-17 19:38:26.379337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.363 [2024-11-17 19:38:26.379368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.363 [2024-11-17 19:38:26.379385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.363 [2024-11-17 19:38:26.384636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.363 [2024-11-17 19:38:26.384666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.363 [2024-11-17 19:38:26.384691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.390024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.390054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.390070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.395580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.395610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.395627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.401154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.401188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.401207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.406658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.406701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.406735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.412473] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.412507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.412526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.418223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.418259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.418278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.424008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.424040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.424058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.429575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.429608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.429627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.434969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.435000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.435016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.440569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.440601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.440627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.446621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.446654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.446680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:28.364 [2024-11-17 19:38:26.452239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.452273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.452292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.457668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.457725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.457742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.462787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.462821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.462838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.468245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.468280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.468299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.473645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.473689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.473711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.479137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.479171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.479190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.484486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.484519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.484538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.489841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.489871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.489888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.495470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.495503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.495522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.500398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.500432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.500451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.504318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.504351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.504369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.509303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.509348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.509367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.515165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.515198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.515217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.521713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.521767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.521785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.527170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.527204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.364 [2024-11-17 19:38:26.527223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.364 [2024-11-17 19:38:26.531970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.364 [2024-11-17 19:38:26.532001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.532024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.536840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.536870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.536887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.541346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.541379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.541397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.546029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.546059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.546076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.550346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.550379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.550398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.555217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.555250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.365 [2024-11-17 19:38:26.555269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.559581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.559615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.559633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.563922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.563953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.563969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.568930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.568959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.568992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.573829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.573878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.573895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.579616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.579650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.579669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.585190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.585224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.585243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.590581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.590615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.590634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.594289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.594318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.594335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.597885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.597914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.597930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.602375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.602405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.602421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.609585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.609630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.609647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.616484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.616519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.616538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.365 [2024-11-17 19:38:26.624041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.365 [2024-11-17 19:38:26.624072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.365 [2024-11-17 19:38:26.624089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.631534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.631583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.631602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.638290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.638324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.638342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.643491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.643524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.643543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.648318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.648355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.648375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.653864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.653895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.653912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.659536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.659570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.659589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.665050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.665099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.665118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.670672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 
[2024-11-17 19:38:26.670729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.670753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.676597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.676631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.676650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.682458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.682493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.682512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.688051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.688096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.688113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.693844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.693875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.693906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.699761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.699791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.699808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.705821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.705852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.705869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.711931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.711964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.711981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.717745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.624 [2024-11-17 19:38:26.717775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.624 [2024-11-17 19:38:26.717791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.624 [2024-11-17 19:38:26.722831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.722867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.722885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.728036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.728065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.728099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.732968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.732999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.733016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.738441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.738473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.738491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.745105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.745139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.745159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.752316] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.752360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.752379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.758556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.758590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.758609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.765437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.765467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.765484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.772626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.772656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.772687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.778604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.778637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.778656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.782270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.782305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.782324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.625 [2024-11-17 19:38:26.786978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0) 00:29:28.625 [2024-11-17 19:38:26.787024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.625 [2024-11-17 19:38:26.787044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:29:28.625 [2024-11-17 19:38:26.791192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13732f0)
00:29:28.625 [2024-11-17 19:38:26.791238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.625 [2024-11-17 19:38:26.791257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:28.625
00:29:28.625 Latency(us)
00:29:28.625 [2024-11-17T18:38:26.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.625 [2024-11-17T18:38:26.892Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:28.625 nvme0n1 : 2.00 5791.13 723.89 0.00 0.00 2758.26 734.25 8301.23
00:29:28.625 [2024-11-17T18:38:26.892Z] ===================================================================================================================
00:29:28.625 [2024-11-17T18:38:26.892Z] Total : 5791.13 723.89 0.00 0.00 2758.26 734.25 8301.23
00:29:28.625 0
00:29:28.625 19:38:26 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:28.625 19:38:26 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:28.625 19:38:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:28.625 19:38:26 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:28.625 | .driver_specific
00:29:28.625 | .nvme_error
00:29:28.625 | .status_code
00:29:28.625 | .command_transient_transport_error'
00:29:28.883 19:38:27 -- host/digest.sh@71 -- # (( 373 > 0 ))
00:29:28.883 19:38:27 -- host/digest.sh@73 -- # killprocess 1323195
00:29:28.883 19:38:27 -- common/autotest_common.sh@936 -- # '[' -z 1323195 ']'
00:29:28.883 19:38:27 -- common/autotest_common.sh@940 -- # kill -0 1323195
00:29:28.883 19:38:27 -- common/autotest_common.sh@941 -- # uname
00:29:28.883 19:38:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:28.883 19:38:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1323195
00:29:28.883 19:38:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:28.883 19:38:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:28.883 19:38:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1323195'
00:29:28.883 killing process with pid 1323195
00:29:28.883 19:38:27 -- common/autotest_common.sh@955 -- # kill 1323195
00:29:28.883 Received shutdown signal, test time was about 2.000000 seconds
00:29:28.883
00:29:28.883 Latency(us)
00:29:28.883 [2024-11-17T18:38:27.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.883 [2024-11-17T18:38:27.150Z] ===================================================================================================================
00:29:28.883 [2024-11-17T18:38:27.150Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:28.883 19:38:27 -- common/autotest_common.sh@960 -- # wait 1323195
00:29:29.141 19:38:27 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:29.141 19:38:27 -- host/digest.sh@54 -- # local rw bs qd
00:29:29.141 19:38:27 -- host/digest.sh@56 -- # rw=randwrite
00:29:29.141 19:38:27 -- host/digest.sh@56 -- # bs=4096
00:29:29.141 19:38:27 -- host/digest.sh@56 -- # qd=128
00:29:29.141 19:38:27 -- host/digest.sh@58 -- # bperfpid=1323743
00:29:29.141 19:38:27 -- host/digest.sh@60 -- # waitforlisten 1323743 /var/tmp/bperf.sock
00:29:29.141 19:38:27 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:29.141 19:38:27 -- common/autotest_common.sh@829 -- # '[' -z 1323743 ']'
00:29:29.141 19:38:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:29.141 19:38:27 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:29.141 19:38:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:29.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:29.141 19:38:27 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:29.141 19:38:27 -- common/autotest_common.sh@10 -- # set +x
00:29:29.141 [2024-11-17 19:38:27.353907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:29:29.141 [2024-11-17 19:38:27.354002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323743 ]
00:29:29.141 EAL: No free 2048 kB hugepages reported on node 1
00:29:29.399 [2024-11-17 19:38:27.418769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:29.399 [2024-11-17 19:38:27.508638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.332 19:38:28 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:30.332 19:38:28 -- common/autotest_common.sh@862 -- # return 0
00:29:30.332 19:38:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:30.332 19:38:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:30.617 19:38:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:30.617 19:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.617 19:38:28 -- common/autotest_common.sh@10 -- # set +x
00:29:30.617 19:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.617 19:38:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.617 19:38:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.917 nvme0n1
00:29:30.917 19:38:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:30.917 19:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.917 19:38:28 -- common/autotest_common.sh@10 -- # set +x
00:29:30.917 19:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.917 19:38:28 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:30.917 19:38:28 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:30.917 Running I/O for 2 seconds...
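The xtrace above is the whole setup for this digest-error pass: bdevperf is launched against /var/tmp/bperf.sock, per-status-code error counting and unlimited bdev retries are enabled, the TCP controller is attached with data digest checking (--ddgst), CRC32C corruption is injected through the accel error-injection RPC, and the transient-transport-error counter is later read back from bdev_get_iostat (it came back as 373 in the randread pass that just finished, hence the (( 373 > 0 )) check). Condensed into plain commands, a rough sketch of the same flow looks like the lines below; the $bperf_rpc / $target_rpc shorthands and the target socket path are assumptions for illustration, while the RPC names, flags, and the jq filter are exactly the ones visible in the trace.

  # Sketch only, condensed from the host/digest.sh trace above.
  bperf_rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's RPC socket (from the trace)
  target_rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed socket of the app that rpc_cmd talks to in the trace

  # Launch bdevperf with the same workload parameters as the trace (randwrite, 4 KiB I/O, queue depth 128, 2 s).
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-status-code NVMe error statistics and retry failed commands without limit.
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the controller over TCP with data digest enabled, then turn on CRC32C error injection
  # (the -t corrupt -i 256 arguments are taken verbatim from the trace).
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the workload, then read how many commands completed with a transient transport error (00/22).
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

With the digest checks enabled and the CRC32C path corrupted, each affected command completes with sct/sc 00/22 (transient transport error), which is exactly what the repeated spdk_nvme_print_completion notices in the output below show for the 2-second randwrite run.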
00:29:30.917 [2024-11-17 19:38:29.090177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f6890 00:29:30.917 [2024-11-17 19:38:29.090670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.917 [2024-11-17 19:38:29.090718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:30.917 [2024-11-17 19:38:29.103936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f6458 00:29:30.917 [2024-11-17 19:38:29.105362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.917 [2024-11-17 19:38:29.105397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:30.917 [2024-11-17 19:38:29.116376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f5378 00:29:30.917 [2024-11-17 19:38:29.117774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.917 [2024-11-17 19:38:29.117806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:30.917 [2024-11-17 19:38:29.128685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f31b8 00:29:30.918 [2024-11-17 19:38:29.130113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.918 [2024-11-17 19:38:29.130147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:30.918 [2024-11-17 19:38:29.141084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eff18 00:29:30.918 [2024-11-17 19:38:29.142602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.918 [2024-11-17 19:38:29.142635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.918 [2024-11-17 19:38:29.153355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eff18 00:29:30.918 [2024-11-17 19:38:29.154834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.918 [2024-11-17 19:38:29.154870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.168597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e3d08 00:29:31.176 [2024-11-17 19:38:29.169861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.169899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 
p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.177826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e38d0 00:29:31.176 [2024-11-17 19:38:29.178081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.178113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.193578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f5378 00:29:31.176 [2024-11-17 19:38:29.194536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.194569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.204308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e38d0 00:29:31.176 [2024-11-17 19:38:29.205616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.205656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.216654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f9f68 00:29:31.176 [2024-11-17 19:38:29.216812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.216840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.228904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e0ea0 00:29:31.176 [2024-11-17 19:38:29.229030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.229063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.241484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e1b48 00:29:31.176 [2024-11-17 19:38:29.241571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.241602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.255951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e01f8 00:29:31.176 [2024-11-17 19:38:29.257428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.257462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.268117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e3d08 00:29:31.176 [2024-11-17 19:38:29.269586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.269621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.280388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e0ea0 00:29:31.176 [2024-11-17 19:38:29.281888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.281918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.292754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f9b30 00:29:31.176 [2024-11-17 19:38:29.294216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.294249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.305145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ed4e8 00:29:31.176 [2024-11-17 19:38:29.306359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.306393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.317368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f6890 00:29:31.176 [2024-11-17 19:38:29.318857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.318891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.329497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f92c0 00:29:31.176 [2024-11-17 19:38:29.330933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.330963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.341571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eb760 00:29:31.176 [2024-11-17 19:38:29.343111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.343145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.353933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e38d0 00:29:31.176 [2024-11-17 19:38:29.355531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.355564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.366212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e38d0 00:29:31.176 [2024-11-17 19:38:29.367815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.367858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.377783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df550 00:29:31.176 [2024-11-17 19:38:29.378211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.378262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.390296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e3498 00:29:31.176 [2024-11-17 19:38:29.391295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.391328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.402028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ed4e8 00:29:31.176 [2024-11-17 19:38:29.402624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.402656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.416539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eea00 00:29:31.176 [2024-11-17 19:38:29.416997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.417024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.428597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fd208 00:29:31.176 [2024-11-17 19:38:29.429116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.429148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:31.176 [2024-11-17 19:38:29.440653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f2d80 00:29:31.176 [2024-11-17 19:38:29.441760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.176 [2024-11-17 19:38:29.441790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:31.433 [2024-11-17 19:38:29.452526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190de038 00:29:31.433 [2024-11-17 19:38:29.452703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.433 [2024-11-17 19:38:29.452731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:31.433 [2024-11-17 19:38:29.466603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f3a28 00:29:31.433 [2024-11-17 19:38:29.467385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.433 [2024-11-17 19:38:29.467418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.477551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e0630 00:29:31.434 [2024-11-17 19:38:29.477817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.477845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.492763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ddc00 00:29:31.434 [2024-11-17 19:38:29.494897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.494927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.504770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eb328 00:29:31.434 [2024-11-17 19:38:29.505525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.505557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.516784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e95a0 00:29:31.434 [2024-11-17 19:38:29.517149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.517193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.528579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f81e0 00:29:31.434 [2024-11-17 19:38:29.528891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.528926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.541633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fc128 00:29:31.434 [2024-11-17 19:38:29.542307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.542341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.553758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e8088 00:29:31.434 [2024-11-17 19:38:29.555450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.555484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.566856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190feb58 00:29:31.434 [2024-11-17 19:38:29.568443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.568476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.579845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e0ea0 00:29:31.434 [2024-11-17 19:38:29.581062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.581107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.590474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ed920 00:29:31.434 [2024-11-17 19:38:29.591141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.591175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.602965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e12d8 00:29:31.434 [2024-11-17 19:38:29.604103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 
19:38:29.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.615095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f96f8 00:29:31.434 [2024-11-17 19:38:29.616255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.616288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.627470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f96f8 00:29:31.434 [2024-11-17 19:38:29.628561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.628593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.639842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f96f8 00:29:31.434 [2024-11-17 19:38:29.640918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.640946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.651594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fbcf0 00:29:31.434 [2024-11-17 19:38:29.651719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.651760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.664100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f0ff8 00:29:31.434 [2024-11-17 19:38:29.664383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.664411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.678320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e9168 00:29:31.434 [2024-11-17 19:38:29.680060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.434 [2024-11-17 19:38:29.680093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.434 [2024-11-17 19:38:29.691512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fda78 00:29:31.434 [2024-11-17 19:38:29.692458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:31.434 [2024-11-17 19:38:29.692490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.702647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ebfd0 00:29:31.692 [2024-11-17 19:38:29.703993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.704027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.714246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e5220 00:29:31.692 [2024-11-17 19:38:29.715701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.715734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.726796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190de8a8 00:29:31.692 [2024-11-17 19:38:29.727684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.727730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.739037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fbcf0 00:29:31.692 [2024-11-17 19:38:29.739619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.739651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.751195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fd640 00:29:31.692 [2024-11-17 19:38:29.751763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.751790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.763379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f7100 00:29:31.692 [2024-11-17 19:38:29.763934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.763981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.775711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ecc78 00:29:31.692 [2024-11-17 19:38:29.776294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5232 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.776326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.788001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e73e0 00:29:31.692 [2024-11-17 19:38:29.788586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.788618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.800423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e9168 00:29:31.692 [2024-11-17 19:38:29.800923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.800967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.812693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f8e88 00:29:31.692 [2024-11-17 19:38:29.813111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.813144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.825069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190feb58 00:29:31.692 [2024-11-17 19:38:29.825565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.825598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.837284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190de038 00:29:31.692 [2024-11-17 19:38:29.838232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.838264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.849408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f5378 00:29:31.692 [2024-11-17 19:38:29.850853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.850903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.862221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e49b0 00:29:31.692 [2024-11-17 19:38:29.863588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:16674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.863621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.874605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ed0b0 00:29:31.692 [2024-11-17 19:38:29.875954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.876001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.886871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ef6a8 00:29:31.692 [2024-11-17 19:38:29.888143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.888200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.899171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ea680 00:29:31.692 [2024-11-17 19:38:29.900502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.900534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.911442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eb760 00:29:31.692 [2024-11-17 19:38:29.912843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.912873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.923986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e23b8 00:29:31.692 [2024-11-17 19:38:29.924398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.924440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.936424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190eee38 00:29:31.692 [2024-11-17 19:38:29.937078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.937111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:31.692 [2024-11-17 19:38:29.948765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ed920 00:29:31.692 [2024-11-17 19:38:29.949363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:1212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.692 [2024-11-17 19:38:29.949396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:29.961101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df118 00:29:31.951 [2024-11-17 19:38:29.961667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:29.961709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:29.973446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ea680 00:29:31.951 [2024-11-17 19:38:29.974006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:29.974040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:29.985657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df118 00:29:31.951 [2024-11-17 19:38:29.986190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:29.986234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:29.998297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ec408 00:29:31.951 [2024-11-17 19:38:29.999335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:29.999368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.017154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e4578 00:29:31.951 [2024-11-17 19:38:30.018529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.018564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.029874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e7818 00:29:31.951 [2024-11-17 19:38:30.030757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.030790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.042242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df550 00:29:31.951 [2024-11-17 19:38:30.043612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.043647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.054798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fc560 00:29:31.951 [2024-11-17 19:38:30.055194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.055238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.067495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f57b0 00:29:31.951 [2024-11-17 19:38:30.068196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.068232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.080372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f5378 00:29:31.951 [2024-11-17 19:38:30.080945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.080988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.094651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f81e0 00:29:31.951 [2024-11-17 19:38:30.095982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.096036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.105856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e1b48 00:29:31.951 [2024-11-17 19:38:30.106427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.106460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.118417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f7538 00:29:31.951 [2024-11-17 19:38:30.119895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.119925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.130999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ed920 00:29:31.951 [2024-11-17 
19:38:30.131371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.131398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.145326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fd640 00:29:31.951 [2024-11-17 19:38:30.146145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.146177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.951 [2024-11-17 19:38:30.157024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fd640 00:29:31.951 [2024-11-17 19:38:30.159038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.951 [2024-11-17 19:38:30.159071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.952 [2024-11-17 19:38:30.169549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ec408 00:29:31.952 [2024-11-17 19:38:30.171308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.952 [2024-11-17 19:38:30.171342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.952 [2024-11-17 19:38:30.181845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f4b08 00:29:31.952 [2024-11-17 19:38:30.183228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.952 [2024-11-17 19:38:30.183267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:31.952 [2024-11-17 19:38:30.194652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f4b08 00:29:31.952 [2024-11-17 19:38:30.196268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.952 [2024-11-17 19:38:30.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:31.952 [2024-11-17 19:38:30.207387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fb480 00:29:31.952 [2024-11-17 19:38:30.208084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.952 [2024-11-17 19:38:30.208117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.220008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6fa8 
00:29:32.211 [2024-11-17 19:38:30.220903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.220946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.232533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e3498 00:29:32.211 [2024-11-17 19:38:30.233414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.233447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.245018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6738 00:29:32.211 [2024-11-17 19:38:30.245880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.245911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.257141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e99d8 00:29:32.211 [2024-11-17 19:38:30.257926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.257979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.268646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ea680 00:29:32.211 [2024-11-17 19:38:30.269372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.269411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.280109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fc128 00:29:32.211 [2024-11-17 19:38:30.280783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.280827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.291583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fc998 00:29:32.211 [2024-11-17 19:38:30.292340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.292370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.303216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with 
pdu=0x2000190fe720 00:29:32.211 [2024-11-17 19:38:30.304346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.304375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.314501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ebb98 00:29:32.211 [2024-11-17 19:38:30.316151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.316215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.327086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f20d8 00:29:32.211 [2024-11-17 19:38:30.327862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.327890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.337605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fe2e8 00:29:32.211 [2024-11-17 19:38:30.338798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.338828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.348627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df988 00:29:32.211 [2024-11-17 19:38:30.349934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.349964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.360458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f8618 00:29:32.211 [2024-11-17 19:38:30.361220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.361248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.372061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f2510 00:29:32.211 [2024-11-17 19:38:30.372578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.372607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.383755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c19920) with pdu=0x2000190e49b0 00:29:32.211 [2024-11-17 19:38:30.384168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.384197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.395298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e12d8 00:29:32.211 [2024-11-17 19:38:30.395697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.395727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.407015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f8a50 00:29:32.211 [2024-11-17 19:38:30.407396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.407425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.418590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e12d8 00:29:32.211 [2024-11-17 19:38:30.418957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.418998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.430304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e49b0 00:29:32.211 [2024-11-17 19:38:30.430596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.430638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.441773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f2510 00:29:32.211 [2024-11-17 19:38:30.442046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.442076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.453385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f5378 00:29:32.211 [2024-11-17 19:38:30.453744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.453774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:32.211 [2024-11-17 19:38:30.465079] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e3498 00:29:32.211 [2024-11-17 19:38:30.465437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.211 [2024-11-17 19:38:30.465467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.478757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6b70 00:29:32.470 [2024-11-17 19:38:30.480540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.480570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.489103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fe2e8 00:29:32.470 [2024-11-17 19:38:30.490172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.490206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.500925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fe2e8 00:29:32.470 [2024-11-17 19:38:30.502453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.502481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.511760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fb480 00:29:32.470 [2024-11-17 19:38:30.512958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.512988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.523828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e4578 00:29:32.470 [2024-11-17 19:38:30.524931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.524961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.535543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e5ec8 00:29:32.470 [2024-11-17 19:38:30.536916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.536946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.547320] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fc128 00:29:32.470 [2024-11-17 19:38:30.548391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.548420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.558738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ec840 00:29:32.470 [2024-11-17 19:38:30.559878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.559944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.570695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f7100 00:29:32.470 [2024-11-17 19:38:30.571211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.571241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.582576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df118 00:29:32.470 [2024-11-17 19:38:30.583773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.583803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.593814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e88f8 00:29:32.470 [2024-11-17 19:38:30.594926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.594955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.605282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df550 00:29:32.470 [2024-11-17 19:38:30.606582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.606627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.616501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df550 00:29:32.470 [2024-11-17 19:38:30.617716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.617745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 
19:38:30.627680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df550 00:29:32.470 [2024-11-17 19:38:30.628928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.628966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.639000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190df118 00:29:32.470 [2024-11-17 19:38:30.639808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.639835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.650258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ebb98 00:29:32.470 [2024-11-17 19:38:30.651624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.651655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.661659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190ecc78 00:29:32.470 [2024-11-17 19:38:30.663183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.663213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:32.470 [2024-11-17 19:38:30.675616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fa7d8 00:29:32.470 [2024-11-17 19:38:30.676572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.470 [2024-11-17 19:38:30.676599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:32.471 [2024-11-17 19:38:30.685541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fe720 00:29:32.471 [2024-11-17 19:38:30.686874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.471 [2024-11-17 19:38:30.686904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:32.471 [2024-11-17 19:38:30.696743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fe720 00:29:32.471 [2024-11-17 19:38:30.697772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.471 [2024-11-17 19:38:30.697802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:29:32.471 [2024-11-17 19:38:30.707998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e4140 00:29:32.471 [2024-11-17 19:38:30.708995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.471 [2024-11-17 19:38:30.709025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:32.471 [2024-11-17 19:38:30.719253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e88f8 00:29:32.471 [2024-11-17 19:38:30.720467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.471 [2024-11-17 19:38:30.720497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:32.471 [2024-11-17 19:38:30.730777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190f20d8 00:29:32.471 [2024-11-17 19:38:30.731902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.471 [2024-11-17 19:38:30.731931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:32.729 [2024-11-17 19:38:30.742315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fc560 00:29:32.729 [2024-11-17 19:38:30.743647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.729 [2024-11-17 19:38:30.743686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:32.729 [2024-11-17 19:38:30.753774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190fcdd0 00:29:32.729 [2024-11-17 19:38:30.754891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.729 [2024-11-17 19:38:30.754922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:32.729 [2024-11-17 19:38:30.766718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.729 [2024-11-17 19:38:30.766923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.729 [2024-11-17 19:38:30.766953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.729 [2024-11-17 19:38:30.779800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.729 [2024-11-17 19:38:30.780018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.729 [2024-11-17 19:38:30.780047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.729 [2024-11-17 19:38:30.792531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.729 [2024-11-17 19:38:30.792786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.792824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.805562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.805787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.818570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.818795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.818824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.831455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.831680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.831710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.844489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.844731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.844774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.857353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.857573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.857602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.870426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.870657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.870709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.883475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.883696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.883725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.896230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.896474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.896503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.909563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.909811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.909840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.922604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.922822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.922851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.935714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.935962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.935991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.948795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.949044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.949073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.961830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.962050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.962078] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.974924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.975174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.975203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.730 [2024-11-17 19:38:30.987814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.730 [2024-11-17 19:38:30.988030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.730 [2024-11-17 19:38:30.988060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.000332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.000562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.000591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.013084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.013302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.013331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.026071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.026290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.026319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.039100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.039302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.039331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.052048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.052267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.052296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.065029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.065262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.065307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 [2024-11-17 19:38:31.078141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19920) with pdu=0x2000190e6300 00:29:32.989 [2024-11-17 19:38:31.078387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.989 [2024-11-17 19:38:31.078417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.989 00:29:32.989 Latency(us) 00:29:32.989 [2024-11-17T18:38:31.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.989 [2024-11-17T18:38:31.256Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.989 nvme0n1 : 2.01 20661.63 80.71 0.00 0.00 6181.71 2718.53 16019.91 00:29:32.989 [2024-11-17T18:38:31.256Z] =================================================================================================================== 00:29:32.989 [2024-11-17T18:38:31.256Z] Total : 20661.63 80.71 0.00 0.00 6181.71 2718.53 16019.91 00:29:32.989 0 00:29:32.989 19:38:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.989 19:38:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.989 19:38:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.989 | .driver_specific 00:29:32.989 | .nvme_error 00:29:32.989 | .status_code 00:29:32.989 | .command_transient_transport_error' 00:29:32.989 19:38:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:33.247 19:38:31 -- host/digest.sh@71 -- # (( 162 > 0 )) 00:29:33.247 19:38:31 -- host/digest.sh@73 -- # killprocess 1323743 00:29:33.247 19:38:31 -- common/autotest_common.sh@936 -- # '[' -z 1323743 ']' 00:29:33.247 19:38:31 -- common/autotest_common.sh@940 -- # kill -0 1323743 00:29:33.247 19:38:31 -- common/autotest_common.sh@941 -- # uname 00:29:33.247 19:38:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:33.247 19:38:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1323743 00:29:33.247 19:38:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:33.247 19:38:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:33.247 19:38:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1323743' 00:29:33.247 killing process with pid 1323743 00:29:33.247 19:38:31 -- common/autotest_common.sh@955 -- # kill 1323743 00:29:33.247 Received shutdown signal, test time was about 2.000000 seconds 00:29:33.247 00:29:33.247 Latency(us) 00:29:33.247 [2024-11-17T18:38:31.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.247 [2024-11-17T18:38:31.514Z] 
=================================================================================================================== 00:29:33.247 [2024-11-17T18:38:31.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.247 19:38:31 -- common/autotest_common.sh@960 -- # wait 1323743 00:29:33.505 19:38:31 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:33.505 19:38:31 -- host/digest.sh@54 -- # local rw bs qd 00:29:33.505 19:38:31 -- host/digest.sh@56 -- # rw=randwrite 00:29:33.505 19:38:31 -- host/digest.sh@56 -- # bs=131072 00:29:33.505 19:38:31 -- host/digest.sh@56 -- # qd=16 00:29:33.505 19:38:31 -- host/digest.sh@58 -- # bperfpid=1324248 00:29:33.505 19:38:31 -- host/digest.sh@60 -- # waitforlisten 1324248 /var/tmp/bperf.sock 00:29:33.505 19:38:31 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:33.505 19:38:31 -- common/autotest_common.sh@829 -- # '[' -z 1324248 ']' 00:29:33.505 19:38:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.505 19:38:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.505 19:38:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.505 19:38:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.505 19:38:31 -- common/autotest_common.sh@10 -- # set +x 00:29:33.506 [2024-11-17 19:38:31.662292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:33.506 [2024-11-17 19:38:31.662370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324248 ] 00:29:33.506 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:33.506 Zero copy mechanism will not be used. 
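The (( 162 > 0 )) check a little earlier in this run is how the harness decides the previous bperf pass actually exercised the data-digest path: it asks the bdevperf process, over its RPC socket, for nvme0n1's per-status-code NVMe error counters and requires a non-zero count of transient transport errors. A minimal sketch of that query follows; the socket path, bdev name, RPC method and jq filter are copied from the log, while the shortened ./scripts/rpc.py path and the wrapper shape are assumptions (the harness's real get_transient_errcount definition is not shown in this excerpt).

# Sketch: read the transient-transport-error counter for a bdev from bdevperf.
# Assumes bdevperf was started with -r /var/tmp/bperf.sock and the controller
# was attached as -b nvme0 (bdev nvme0n1), as in this run.
get_transient_errcount() {
    local bdev=$1
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # the pass only counts if at least one digest error was recorded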
00:29:33.506 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.506 [2024-11-17 19:38:31.727500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.764 [2024-11-17 19:38:31.817459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.764 19:38:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.764 19:38:31 -- common/autotest_common.sh@862 -- # return 0 00:29:33.764 19:38:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:33.764 19:38:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:34.022 19:38:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:34.022 19:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.022 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.022 19:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.022 19:38:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.022 19:38:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:34.589 nvme0n1 00:29:34.589 19:38:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:34.589 19:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.589 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.589 19:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.589 19:38:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:34.589 19:38:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:34.589 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:34.589 Zero copy mechanism will not be used. 00:29:34.589 Running I/O for 2 seconds... 
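Before the two-second I/O loop above starts, the run is armed in four RPC steps: the bdevperf side is told to keep per-status-code NVMe error statistics and retry with --bdev-retry-count -1, CRC32C error injection is switched off while the controller is attached over TCP with data digest (--ddgst) enabled, injection is then turned back on for crc32c with the -i 32 value used by this run, and bdevperf.py perform_tests kicks off the workload. A condensed sketch of that sequence is below, with flag values copied from the log; the bperf_rpc/rpc_cmd wrapper definitions, the relative script paths, and the target's RPC socket are assumptions since they are not shown in this excerpt.

# bperf_rpc drives the bdevperf process via its -r /var/tmp/bperf.sock socket;
# rpc_cmd is assumed to drive the nvmf target over its default RPC socket.
bperf_rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc_cmd()   { ./scripts/rpc.py "$@"; }

# Keep NVMe error counters per status code; retry count taken from the log.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# No CRC32C corruption while the controller is being attached.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach over TCP with data digest enabled so payload CRC32C is verified.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Turn CRC32C corruption back on (same -i 32 setting as the log), then run.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests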
00:29:34.589 [2024-11-17 19:38:32.721986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.722317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.722355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.726908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.727065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.727095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.731377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.731500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.731529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.735826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.735963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.735992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.740166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.740279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.740307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.744454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.744547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.748737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.748883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.748911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.753420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.753616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.753649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.757928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.758144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.758180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.762742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.762895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.762924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.767380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.767456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.767483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.771955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.772053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.772082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.776659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.776745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.776772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.781499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.781603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.781630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.786186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.786351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.786379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.791555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.791783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.791813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.795923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.796124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.796152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.800761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.800976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.801004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.805801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.805958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.805988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.810189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.810310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.810339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.814705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.814836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.814864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.819186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.819308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.819336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.589 [2024-11-17 19:38:32.823560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.589 [2024-11-17 19:38:32.823735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.589 [2024-11-17 19:38:32.823765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.590 [2024-11-17 19:38:32.828848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.590 [2024-11-17 19:38:32.829088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.590 [2024-11-17 19:38:32.829116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.590 [2024-11-17 19:38:32.833594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.590 [2024-11-17 19:38:32.833774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.590 [2024-11-17 19:38:32.833803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.590 [2024-11-17 19:38:32.837857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.590 [2024-11-17 19:38:32.837983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.590 [2024-11-17 19:38:32.838018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.590 [2024-11-17 19:38:32.842051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.590 [2024-11-17 19:38:32.842164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.590 [2024-11-17 19:38:32.842192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.590 [2024-11-17 19:38:32.846243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.590 [2024-11-17 19:38:32.846365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.590 
[2024-11-17 19:38:32.846394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.590 [2024-11-17 19:38:32.850579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.590 [2024-11-17 19:38:32.850703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.590 [2024-11-17 19:38:32.850731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.849 [2024-11-17 19:38:32.855588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.849 [2024-11-17 19:38:32.855805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.849 [2024-11-17 19:38:32.855834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.849 [2024-11-17 19:38:32.861068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.849 [2024-11-17 19:38:32.861226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.849 [2024-11-17 19:38:32.861255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.849 [2024-11-17 19:38:32.867004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.849 [2024-11-17 19:38:32.867245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.849 [2024-11-17 19:38:32.867273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.849 [2024-11-17 19:38:32.871496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.871694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.871724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.875736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.875885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.875914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.880557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.880707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.880736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.885404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.885509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.885537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.889521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.889618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.889645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.893620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.893719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.893750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.897797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.897890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.897917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.902153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.902363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.902391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.906337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.906507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.906535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.910517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.910619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.910648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.914699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.914800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.914828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.918873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.918981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.919009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.922946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.923055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.923083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.927329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.927477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.927506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.932354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.932539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.932568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.937564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.937767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.937796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.943813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.943929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.943957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.948114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.948214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.948241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.952377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.952525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.952553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.956835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.956944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.956979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.961207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.961341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.961368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.965434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.850 [2024-11-17 19:38:32.965547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.850 [2024-11-17 19:38:32.965574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.850 [2024-11-17 19:38:32.970103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:32.970338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.970366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:32.975189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 
[2024-11-17 19:38:32.975355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.975384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:32.980296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:32.980519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.980548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:32.986268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:32.986337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.986364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:32.990832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:32.990988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.991017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:32.994885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:32.994998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.995026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:32.999019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:32.999121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:32.999149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.003234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.003421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.003449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.007341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.007523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.007551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.011564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.011712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.011740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.015654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.015771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.015800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.019791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.019883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.019910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.023982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.024119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.024147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.028127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.028240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.028268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.032285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.032408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.032442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.036487] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.036679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.036707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.040703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.040866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.040894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.044835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.044982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.045009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.048907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.049020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.049048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.052957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.053067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.053095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.057103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.057236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.057263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.851 [2024-11-17 19:38:33.061217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.061324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.851 [2024-11-17 19:38:33.061351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
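The repeated tcp.c:2036:data_crc32_calc_done "Data digest error" entries in this stretch come from the NVMe/TCP data digest (DDGST) check: the receiver recomputes a CRC32C over each PDU's data payload and compares it against the digest carried in the PDU, and this part of the test exercises the failure path, so every WRITE here fails the check and is completed with a transport error. The C sketch below is illustrative only and is not SPDK's implementation; crc32c() and data_digest_ok() are made-up names, and a real implementation would use a table-driven or hardware-accelerated CRC.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Kept simple for illustration; production code uses table-driven or
 * hardware (e.g. SSE4.2 crc32) variants. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: recompute the digest over the received
 * payload and compare it with the DDGST value carried in the PDU. */
static int data_digest_ok(const void *payload, size_t len, uint32_t ddgst_from_pdu)
{
    return crc32c(payload, len) == ddgst_from_pdu;
}

int main(void)
{
    const char payload[] = "example PDU payload";
    uint32_t good = crc32c(payload, strlen(payload));

    printf("digest ok:        %d\n", data_digest_ok(payload, strlen(payload), good));
    printf("digest corrupted: %d\n", data_digest_ok(payload, strlen(payload), good ^ 1u));
    return 0;
}

A mismatch like the second case is what each surrounding log record reports; the transfer is then failed instead of being written.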
00:29:34.851 [2024-11-17 19:38:33.065360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.851 [2024-11-17 19:38:33.065483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.065510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.069581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.069784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.069813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.073748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.073921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.073948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.077858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.078000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.078028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.081963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.082077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.082104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.086043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.086165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.090183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.090330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.090358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.094250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.094349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.094376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.098359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.098495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.102515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.102708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.102736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.106635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.106813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.852 [2024-11-17 19:38:33.110766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:34.852 [2024-11-17 19:38:33.110898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.852 [2024-11-17 19:38:33.110927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.112 [2024-11-17 19:38:33.114863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.112 [2024-11-17 19:38:33.114977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.112 [2024-11-17 19:38:33.115006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.112 [2024-11-17 19:38:33.118992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.112 [2024-11-17 19:38:33.119089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.112 [2024-11-17 19:38:33.119119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.112 [2024-11-17 19:38:33.123085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.112 [2024-11-17 19:38:33.123231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.123259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.127177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.127285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.127313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.131297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.131417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.131445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.135510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.135702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.135731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.139734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.139913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.139949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.143832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.143973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.144001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.147929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.148031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.148059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.152019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.152126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.152155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.156077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.156217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.156245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.160170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.160281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.160309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.164295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.164417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.164445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.168468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.168653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.172659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.172845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.172873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.176814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.176964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 
[2024-11-17 19:38:33.176992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.180910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.181026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.181054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.184961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.185056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.185082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.189108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.189246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.189274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.193224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.193330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.193358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.197516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.197665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.197703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.202621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.202880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.202908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.207882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.208072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.208101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.212951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.213160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.213188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.218500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.113 [2024-11-17 19:38:33.218697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.113 [2024-11-17 19:38:33.218726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.113 [2024-11-17 19:38:33.223583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.223777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.223804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.228661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.228861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.228889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.233682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.233812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.233840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.238875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.239080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.239109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.243997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.244152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.244180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.249175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.249428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.249456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.254187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.254409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.254438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.259331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.259488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.259522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.264556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.264720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.264749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.269758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.269899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.269927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.274929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.275106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.275133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.280095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.280247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.280275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.285310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.285544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.285574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.290485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.290607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.290636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.295609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.295801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.295829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.300707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.300890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.300919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.305490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.305681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.305709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.310711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.310901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.310929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.315818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 
[2024-11-17 19:38:33.315978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.316006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.321028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.321166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.321193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.326161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.326299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.326328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.331214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.331331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.114 [2024-11-17 19:38:33.331358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.114 [2024-11-17 19:38:33.336517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.114 [2024-11-17 19:38:33.336769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.336797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.341542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.341672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.341710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.346772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.346949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.346976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.351984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.352231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.352259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.357119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.357274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.357302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.362302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.362443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.362471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.367382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.367544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.367572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.372848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.115 [2024-11-17 19:38:33.373080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.115 [2024-11-17 19:38:33.373112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.115 [2024-11-17 19:38:33.377833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.374 [2024-11-17 19:38:33.377980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.374 [2024-11-17 19:38:33.378009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.374 [2024-11-17 19:38:33.383166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.374 [2024-11-17 19:38:33.383350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.374 [2024-11-17 19:38:33.383379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.374 [2024-11-17 19:38:33.388206] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.374 [2024-11-17 19:38:33.388383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.374 [2024-11-17 19:38:33.388411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.374 [2024-11-17 19:38:33.393537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.374 [2024-11-17 19:38:33.393708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.374 [2024-11-17 19:38:33.393743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.374 [2024-11-17 19:38:33.398706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.374 [2024-11-17 19:38:33.398894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.374 [2024-11-17 19:38:33.398923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.374 [2024-11-17 19:38:33.403850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.374 [2024-11-17 19:38:33.404001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.374 [2024-11-17 19:38:33.404028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.374 [2024-11-17 19:38:33.408989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.409230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.409258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.414127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.414276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.414303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.419214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.419365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
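Each digest failure above is surfaced to the host as the paired completion line, COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic command status) and status code 0x22, printed here with dnr:0. The short C sketch below decodes those fields; it assumes the standard NVMe completion status layout for the upper 16 bits of completion dword 3 (P in bit 0, SC in bits 8:1, SCT in bits 11:9, CRD in bits 13:12, M in bit 14, DNR in bit 15), and decode_status() and struct cpl_status are names made up for the example, not SPDK or spec identifiers.

#include <stdint.h>
#include <stdio.h>

/* Assumed standard NVMe completion status layout for the upper 16 bits of
 * completion dword 3: P[0], SC[8:1], SCT[11:9], CRD[13:12], M[14], DNR[15]. */
struct cpl_status {
    unsigned p, sc, sct, crd, m, dnr;
};

static struct cpl_status decode_status(uint16_t sf)
{
    struct cpl_status s = {
        .p   = sf & 0x1u,
        .sc  = (sf >> 1) & 0xFFu,
        .sct = (sf >> 9) & 0x7u,
        .crd = (sf >> 12) & 0x3u,
        .m   = (sf >> 14) & 0x1u,
        .dnr = (sf >> 15) & 0x1u,
    };
    return s;
}

int main(void)
{
    /* Rebuild the value the log prints as "(00/22) ... p:0 m:0 dnr:0":
     * SCT 0x0 (generic command status), SC 0x22 (Transient Transport Error). */
    uint16_t sf = (uint16_t)((0x0u << 9) | (0x22u << 1));
    struct cpl_status s = decode_status(sf);

    printf("sct:%x sc:%02x p:%u m:%u dnr:%u -> %s\n",
           s.sct, s.sc, s.p, s.m, s.dnr,
           s.dnr ? "do not retry" : "retry permitted");
    return 0;
}

Because dnr is 0 in every completion in this block, the host side remains permitted to retry these WRITEs.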
00:29:35.375 [2024-11-17 19:38:33.424395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.424640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.424668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.429429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.429657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.429694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.434470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.434604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.434631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.439755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.440004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.440033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.444855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.445002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.445029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.449974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.450151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.450179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.455066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.455202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.455231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.460093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.460262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.460290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.465320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.465495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.465523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.470582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.470855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.470884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.475624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.475777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.475805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.480931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.481193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.481227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.485708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.485897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.485926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.489946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.490181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.490211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.494294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.494449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.494477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.498510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.498657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.498694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.502735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.502846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.502874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.507455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.507625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.507654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.513374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.513537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.513565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.518548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.518843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.518872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.523585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.523857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.523886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.528138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.528286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.528314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.532388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.532540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.532569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.536566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.536720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.536747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.540806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.540920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.540948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.545137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.545295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.545322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.549358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.549505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 [2024-11-17 19:38:33.549534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.375 [2024-11-17 19:38:33.553750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.375 [2024-11-17 19:38:33.553964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.375 
[2024-11-17 19:38:33.553993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.557905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.558102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.562199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.562412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.562441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.568312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.568519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.568546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.573244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.573373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.573400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.577470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.577581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.577609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.581938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.582110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.582138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.586808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.586960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.586988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.591613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.591819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.591848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.596102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.596235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.596263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.600880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.600986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.601020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.605736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.605839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.605870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.610298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.610401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.610429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.614945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.615040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.615072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.619668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.619835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.619864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.624274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.624387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.624416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.629178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.629371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.629399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.633887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.634102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.634130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.376 [2024-11-17 19:38:33.639368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.376 [2024-11-17 19:38:33.639513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.376 [2024-11-17 19:38:33.639541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.644294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.644404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.644434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.649070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.649156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.649182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.654380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.654450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.654476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.659607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.659747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.659775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.664325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.664440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.664468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.669289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.669482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.669510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.674179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.674378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.674407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.678831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.678965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.678992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.683793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.683888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.683915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.688356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 
[2024-11-17 19:38:33.688459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.688487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.692895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.692988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.693016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.697591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.697728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.697766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.702223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.635 [2024-11-17 19:38:33.702315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.635 [2024-11-17 19:38:33.702342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.635 [2024-11-17 19:38:33.707016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.707206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.707234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.711668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.711876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.711905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.716471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.716605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.716633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.721141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with 
pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.721276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.721304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.725865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.725933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.725965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.730532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.730604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.730630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.735203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.735365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.735391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.739835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.739932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.739962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.744456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.744680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.744710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.748969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.749118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.749146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.753387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.753537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.753566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.757612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.757726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.757754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.762225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.762315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.762342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.766556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.766667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.766702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.770741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.770896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.770925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.774930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.775057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.775084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.779128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.779343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.779370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.783411] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.783608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.783636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.787825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.787985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.788013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.792052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.792180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.792208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.796293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.796396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.796424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.636 [2024-11-17 19:38:33.800419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.636 [2024-11-17 19:38:33.800517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.636 [2024-11-17 19:38:33.800543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.804807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.804965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.804993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.809009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.809133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.809160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.813291] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.813507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.813535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.817470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.817662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.817697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.821756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.821928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.821957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.825964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.826081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.826109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.830207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.830312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.830340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.834304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.834406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.834434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.838472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.838622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.838655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.637 
[2024-11-17 19:38:33.842629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.842758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.842788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.846902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.847135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.847181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.851065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.851258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.851286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.855342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.855502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.855529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.859477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.859582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.859610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.863559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.863651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.863685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.867671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.867778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.867806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.871959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.872120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.872148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.876213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.876352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.876379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.880642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.880867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.880896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.884834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.885042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.885071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.889036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.889196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.889224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.893284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.893398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.893427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.637 [2024-11-17 19:38:33.897474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.637 [2024-11-17 19:38:33.897603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.637 [2024-11-17 19:38:33.897630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.902170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.902338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.902366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.907383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.907613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.907641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.912715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.912907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.912936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.918799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.918904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.918932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.923917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.924145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.924174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.929066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.929235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.929263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.934151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.934329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.934357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.939220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.939347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.939374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.944348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.944592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.944620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.949800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.949897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.949928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.955029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.955282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.955311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.960187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.960312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.960344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.965446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.965711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.965740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.970523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.970751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.970779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.975776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.975930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.975960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.981091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.981267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.981294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.986453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.986643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.986671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.991230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.896 [2024-11-17 19:38:33.991422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.896 [2024-11-17 19:38:33.991451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.896 [2024-11-17 19:38:33.995813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:33.996031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:33.996060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.000258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.000445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.000473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.005737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.005879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 
19:38:34.005908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.010115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.010206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.010232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.014341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.014454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.014482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.018615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.018750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.018778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.022951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.023106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.023135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.027220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.027340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.027368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.031507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.031729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.031757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.035794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.036015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:35.897 [2024-11-17 19:38:34.036044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.040023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.040191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.040219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.044243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.044373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.044402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.048429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.048524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.048551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.052670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.052785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.052811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.056951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.057114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.057142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.061145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.061267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.061295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.065483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.065704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.065732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.069719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.069920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.069948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.074017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.074198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.074226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.078770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.078993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.079026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.083912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.084052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.084079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.089235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.089425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.089453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.094884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.094971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.094997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.099312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.099473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.099501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.103778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.103972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.104001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.108193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.108305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.108333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.112526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.897 [2024-11-17 19:38:34.112637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.897 [2024-11-17 19:38:34.112664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.897 [2024-11-17 19:38:34.117533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.117801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.117829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.122876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.123091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.123119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.128839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.128971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.128998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.133938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.134110] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.134138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.138274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.138430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.138458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.142734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.142824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.142850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.147280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.147436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.147464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.151700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.151811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.151838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.156254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.156363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.156390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.898 [2024-11-17 19:38:34.160617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:35.898 [2024-11-17 19:38:34.160784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.898 [2024-11-17 19:38:34.160814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.156 [2024-11-17 19:38:34.165017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.156 [2024-11-17 19:38:34.165138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.156 [2024-11-17 19:38:34.165166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.156 [2024-11-17 19:38:34.169576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.156 [2024-11-17 19:38:34.169793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.156 [2024-11-17 19:38:34.169821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.156 [2024-11-17 19:38:34.173995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.156 [2024-11-17 19:38:34.174144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.156 [2024-11-17 19:38:34.174172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.156 [2024-11-17 19:38:34.178409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.156 [2024-11-17 19:38:34.178625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.156 [2024-11-17 19:38:34.178653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.183331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.183565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.183593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.188487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.188751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.188779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.194182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.194297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.194324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.199571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 
19:38:34.199699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.199728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.203879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.204032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.204067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.208342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.208502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.208530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.212779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.212916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.212944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.217002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.217121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.217150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.222143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.222360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.222388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.227279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.227414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.227456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.233847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with 
pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.234016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.234044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.238439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.238555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.238581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.242835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.242987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.243015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.247089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.247283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.247312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.251952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.252137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.252165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.257169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.257266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.257295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.261564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.261701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.261730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.266683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.266830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.266858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.271839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.272050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.272078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.277509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.277713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.277741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.282618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.282803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.282832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.286858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.286984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.287012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.291439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.291612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.291640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.295762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.295886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.295914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.300083] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.300196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.300224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.304430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.304527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.304555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.157 [2024-11-17 19:38:34.309729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.157 [2024-11-17 19:38:34.309802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-11-17 19:38:34.309828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.314319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.314450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.314478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.318432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.318531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.318559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.322617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.322747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.322776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.326906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.327091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.327128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 
[2024-11-17 19:38:34.331329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.331489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.331517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.335482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.335629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.335657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.339762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.339865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.339892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.344058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.344163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.344191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.348320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.348465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.348492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.352452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.352551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.352578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.356744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.356873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.356900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.361083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.361269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.361297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.365364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.365526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.365555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.369629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.369771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.369799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.373803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.373908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.373936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.377928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.378034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.378062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.382197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.382336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.382363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.386301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.386409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.386436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.390517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.390645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.390672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.394964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.395150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.395178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.399178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.399367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.399395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.403425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.403566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.403593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.407556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.407652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.407688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.411792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.411901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.411928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.415949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.416074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.416102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.158 [2024-11-17 19:38:34.420124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.158 [2024-11-17 19:38:34.420233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-11-17 19:38:34.420264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.416 [2024-11-17 19:38:34.424267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.416 [2024-11-17 19:38:34.424393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.416 [2024-11-17 19:38:34.424422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.416 [2024-11-17 19:38:34.428558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.416 [2024-11-17 19:38:34.428750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.428778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.432839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.433054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.433085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.437182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.437322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.437354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.441461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.441563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.441591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.445584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.445687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.445714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.449886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.450035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.450063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.454024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.454122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.454148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.458153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.458280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.458308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.462574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.462765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.462793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.466851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.467020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.467048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.471056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.471193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.471222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.475201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.475311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 
19:38:34.475339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.479359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.479457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.479485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.483687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.483842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.483870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.487931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.488037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.492168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.492291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.492317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.496460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.496661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.496699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.500708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.500864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.500892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.504942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.505109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:36.417 [2024-11-17 19:38:34.505136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.509737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.509806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.509833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.514400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.514538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.514564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.520761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.520925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.520953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.525915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.526057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.526085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.530312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.530459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.530488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.534731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.534930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.534959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.539827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.540104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.540150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.545200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.417 [2024-11-17 19:38:34.545488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-11-17 19:38:34.545517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-11-17 19:38:34.550578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.550865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.550912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.555843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.556007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.556041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.561209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.561368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.561399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.567204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.567440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.567486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.572480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.572710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.572739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.576796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.576930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.576959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.581227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.581428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.581456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.586569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.586651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.586685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.591564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.591656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.591696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.596054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.596265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.596293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.601329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.601484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.601512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.606482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.606708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.606737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.612659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.612859] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.612888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.617427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.617632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.617660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.621990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.622225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.622253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.626454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.626610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.626638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.630936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.631041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.631069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.635300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.635424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.635452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.639765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.639889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.639917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.644336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.644448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.644476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.648977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.649199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.649228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.653417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.653556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.653584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.657997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.658173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.658201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.662443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.662593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.662621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.667071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.667165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.667206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.671500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.671628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.671656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.675953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 
19:38:34.676117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-11-17 19:38:34.676145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.418 [2024-11-17 19:38:34.680317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.418 [2024-11-17 19:38:34.680426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.419 [2024-11-17 19:38:34.680459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.676 [2024-11-17 19:38:34.684783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.676 [2024-11-17 19:38:34.684947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.676 [2024-11-17 19:38:34.684975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.689144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.689288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.689317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.693643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.693887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.693915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.698185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.698342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.698371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.702689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.702827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.702854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.707337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with 
pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.707449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.707477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.711945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.712081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.712110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.677 [2024-11-17 19:38:34.716396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c19c60) with pdu=0x2000190fef90 00:29:36.677 [2024-11-17 19:38:34.716521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.677 [2024-11-17 19:38:34.716550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.677 00:29:36.677 Latency(us) 00:29:36.677 [2024-11-17T18:38:34.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.677 [2024-11-17T18:38:34.944Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:36.677 nvme0n1 : 2.00 6704.34 838.04 0.00 0.00 2380.09 1784.04 6359.42 00:29:36.677 [2024-11-17T18:38:34.944Z] =================================================================================================================== 00:29:36.677 [2024-11-17T18:38:34.944Z] Total : 6704.34 838.04 0.00 0.00 2380.09 1784.04 6359.42 00:29:36.677 0 00:29:36.677 19:38:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:36.677 19:38:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:36.677 19:38:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:36.677 | .driver_specific 00:29:36.677 | .nvme_error 00:29:36.677 | .status_code 00:29:36.677 | .command_transient_transport_error' 00:29:36.677 19:38:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:36.935 19:38:34 -- host/digest.sh@71 -- # (( 432 > 0 )) 00:29:36.935 19:38:34 -- host/digest.sh@73 -- # killprocess 1324248 00:29:36.935 19:38:34 -- common/autotest_common.sh@936 -- # '[' -z 1324248 ']' 00:29:36.935 19:38:34 -- common/autotest_common.sh@940 -- # kill -0 1324248 00:29:36.935 19:38:34 -- common/autotest_common.sh@941 -- # uname 00:29:36.935 19:38:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:36.935 19:38:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1324248 00:29:36.935 19:38:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:36.935 19:38:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:36.935 19:38:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1324248' 00:29:36.935 killing process with pid 1324248 00:29:36.935 19:38:35 -- common/autotest_common.sh@955 -- # kill 1324248 00:29:36.935 Received shutdown signal, test time was about 2.000000 seconds 00:29:36.935 
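The pass/fail decision for this digest-error run is visible in the trace above: after the 2-second bdevperf job, get_transient_errcount queries bdev_get_iostat over the bperf RPC socket, pulls out the command_transient_transport_error counter with jq, and requires it to be non-zero (432 here). A minimal stand-alone sketch of that check follows; the rpc.py and socket paths and the bdev name are taken from the trace, while the helper name count_transient_errors is illustrative and not part of the suite.

    # Sketch: count transient transport errors reported for a bdev over the
    # bperf RPC socket, mirroring the jq filter traced above.
    count_transient_errors() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    errors=$(count_transient_errors nvme0n1)
    (( errors > 0 )) || echo "expected transient transport errors, got $errors"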
00:29:36.935 Latency(us) 00:29:36.935 [2024-11-17T18:38:35.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.935 [2024-11-17T18:38:35.202Z] =================================================================================================================== 00:29:36.935 [2024-11-17T18:38:35.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.935 19:38:35 -- common/autotest_common.sh@960 -- # wait 1324248 00:29:37.193 19:38:35 -- host/digest.sh@115 -- # killprocess 1322615 00:29:37.193 19:38:35 -- common/autotest_common.sh@936 -- # '[' -z 1322615 ']' 00:29:37.193 19:38:35 -- common/autotest_common.sh@940 -- # kill -0 1322615 00:29:37.193 19:38:35 -- common/autotest_common.sh@941 -- # uname 00:29:37.193 19:38:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:37.193 19:38:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1322615 00:29:37.193 19:38:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:37.193 19:38:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:37.193 19:38:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1322615' 00:29:37.193 killing process with pid 1322615 00:29:37.193 19:38:35 -- common/autotest_common.sh@955 -- # kill 1322615 00:29:37.193 19:38:35 -- common/autotest_common.sh@960 -- # wait 1322615 00:29:37.452 00:29:37.452 real 0m17.391s 00:29:37.452 user 0m34.946s 00:29:37.452 sys 0m4.570s 00:29:37.452 19:38:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:37.452 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:29:37.452 ************************************ 00:29:37.452 END TEST nvmf_digest_error 00:29:37.452 ************************************ 00:29:37.452 19:38:35 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:37.452 19:38:35 -- host/digest.sh@139 -- # nvmftestfini 00:29:37.452 19:38:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:37.452 19:38:35 -- nvmf/common.sh@116 -- # sync 00:29:37.452 19:38:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:37.452 19:38:35 -- nvmf/common.sh@119 -- # set +e 00:29:37.452 19:38:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:37.452 19:38:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:37.452 rmmod nvme_tcp 00:29:37.452 rmmod nvme_fabrics 00:29:37.452 rmmod nvme_keyring 00:29:37.452 19:38:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:37.452 19:38:35 -- nvmf/common.sh@123 -- # set -e 00:29:37.452 19:38:35 -- nvmf/common.sh@124 -- # return 0 00:29:37.452 19:38:35 -- nvmf/common.sh@477 -- # '[' -n 1322615 ']' 00:29:37.452 19:38:35 -- nvmf/common.sh@478 -- # killprocess 1322615 00:29:37.452 19:38:35 -- common/autotest_common.sh@936 -- # '[' -z 1322615 ']' 00:29:37.452 19:38:35 -- common/autotest_common.sh@940 -- # kill -0 1322615 00:29:37.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1322615) - No such process 00:29:37.452 19:38:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1322615 is not found' 00:29:37.452 Process with pid 1322615 is not found 00:29:37.452 19:38:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:37.452 19:38:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:37.452 19:38:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:37.452 19:38:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.452 19:38:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:37.452 19:38:35 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.452 19:38:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.452 19:38:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.982 19:38:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:39.983 00:29:39.983 real 0m37.143s 00:29:39.983 user 1m6.175s 00:29:39.983 sys 0m10.486s 00:29:39.983 19:38:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:39.983 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:39.983 ************************************ 00:29:39.983 END TEST nvmf_digest 00:29:39.983 ************************************ 00:29:39.983 19:38:37 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:39.983 19:38:37 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:39.983 19:38:37 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:39.983 19:38:37 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:39.983 19:38:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:39.983 19:38:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:39.983 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:39.983 ************************************ 00:29:39.983 START TEST nvmf_bdevperf 00:29:39.983 ************************************ 00:29:39.983 19:38:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:39.983 * Looking for test storage... 00:29:39.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.983 19:38:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:39.983 19:38:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:39.983 19:38:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:39.983 19:38:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:39.983 19:38:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:39.983 19:38:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:39.983 19:38:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:39.983 19:38:37 -- scripts/common.sh@335 -- # IFS=.-: 00:29:39.983 19:38:37 -- scripts/common.sh@335 -- # read -ra ver1 00:29:39.983 19:38:37 -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.983 19:38:37 -- scripts/common.sh@336 -- # read -ra ver2 00:29:39.983 19:38:37 -- scripts/common.sh@337 -- # local 'op=<' 00:29:39.983 19:38:37 -- scripts/common.sh@339 -- # ver1_l=2 00:29:39.983 19:38:37 -- scripts/common.sh@340 -- # ver2_l=1 00:29:39.983 19:38:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:39.983 19:38:37 -- scripts/common.sh@343 -- # case "$op" in 00:29:39.983 19:38:37 -- scripts/common.sh@344 -- # : 1 00:29:39.983 19:38:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:39.983 19:38:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.983 19:38:37 -- scripts/common.sh@364 -- # decimal 1 00:29:39.983 19:38:37 -- scripts/common.sh@352 -- # local d=1 00:29:39.983 19:38:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.983 19:38:37 -- scripts/common.sh@354 -- # echo 1 00:29:39.983 19:38:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:39.983 19:38:37 -- scripts/common.sh@365 -- # decimal 2 00:29:39.983 19:38:37 -- scripts/common.sh@352 -- # local d=2 00:29:39.983 19:38:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.983 19:38:37 -- scripts/common.sh@354 -- # echo 2 00:29:39.983 19:38:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:39.983 19:38:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:39.983 19:38:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:39.983 19:38:37 -- scripts/common.sh@367 -- # return 0 00:29:39.983 19:38:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.983 19:38:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.983 --rc genhtml_branch_coverage=1 00:29:39.983 --rc genhtml_function_coverage=1 00:29:39.983 --rc genhtml_legend=1 00:29:39.983 --rc geninfo_all_blocks=1 00:29:39.983 --rc geninfo_unexecuted_blocks=1 00:29:39.983 00:29:39.983 ' 00:29:39.983 19:38:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.983 --rc genhtml_branch_coverage=1 00:29:39.983 --rc genhtml_function_coverage=1 00:29:39.983 --rc genhtml_legend=1 00:29:39.983 --rc geninfo_all_blocks=1 00:29:39.983 --rc geninfo_unexecuted_blocks=1 00:29:39.983 00:29:39.983 ' 00:29:39.983 19:38:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.983 --rc genhtml_branch_coverage=1 00:29:39.983 --rc genhtml_function_coverage=1 00:29:39.983 --rc genhtml_legend=1 00:29:39.983 --rc geninfo_all_blocks=1 00:29:39.983 --rc geninfo_unexecuted_blocks=1 00:29:39.983 00:29:39.983 ' 00:29:39.983 19:38:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.983 --rc genhtml_branch_coverage=1 00:29:39.983 --rc genhtml_function_coverage=1 00:29:39.983 --rc genhtml_legend=1 00:29:39.983 --rc geninfo_all_blocks=1 00:29:39.983 --rc geninfo_unexecuted_blocks=1 00:29:39.983 00:29:39.983 ' 00:29:39.983 19:38:37 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.983 19:38:37 -- nvmf/common.sh@7 -- # uname -s 00:29:39.983 19:38:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.983 19:38:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.983 19:38:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.983 19:38:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.983 19:38:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.983 19:38:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.983 19:38:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.983 19:38:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.983 19:38:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.983 19:38:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.983 19:38:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.983 19:38:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.983 19:38:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.983 19:38:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.983 19:38:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.983 19:38:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.983 19:38:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.983 19:38:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.983 19:38:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.983 19:38:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.983 19:38:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.983 19:38:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.983 19:38:37 -- paths/export.sh@5 -- # export PATH 00:29:39.983 19:38:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.983 19:38:37 -- nvmf/common.sh@46 -- # : 0 00:29:39.983 19:38:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:39.983 19:38:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:39.983 19:38:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:39.983 19:38:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.983 19:38:37 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.983 19:38:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:39.983 19:38:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:39.983 19:38:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:39.983 19:38:37 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.983 19:38:37 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.983 19:38:37 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:39.983 19:38:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:39.983 19:38:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.983 19:38:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:39.983 19:38:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:39.983 19:38:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:39.983 19:38:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.983 19:38:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.983 19:38:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.983 19:38:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:39.983 19:38:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:39.983 19:38:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:39.983 19:38:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.884 19:38:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:41.885 19:38:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:41.885 19:38:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:41.885 19:38:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:41.885 19:38:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:41.885 19:38:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:41.885 19:38:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:41.885 19:38:39 -- nvmf/common.sh@294 -- # net_devs=() 00:29:41.885 19:38:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:41.885 19:38:39 -- nvmf/common.sh@295 -- # e810=() 00:29:41.885 19:38:39 -- nvmf/common.sh@295 -- # local -ga e810 00:29:41.885 19:38:39 -- nvmf/common.sh@296 -- # x722=() 00:29:41.885 19:38:39 -- nvmf/common.sh@296 -- # local -ga x722 00:29:41.885 19:38:39 -- nvmf/common.sh@297 -- # mlx=() 00:29:41.885 19:38:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:41.885 19:38:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.885 19:38:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:41.885 19:38:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:41.885 19:38:39 
-- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:41.885 19:38:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:41.885 19:38:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:41.885 19:38:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.885 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.885 19:38:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:41.885 19:38:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.885 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.885 19:38:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:41.885 19:38:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:41.885 19:38:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.885 19:38:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:41.885 19:38:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.885 19:38:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.885 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.885 19:38:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.885 19:38:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:41.885 19:38:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.885 19:38:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:41.885 19:38:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.885 19:38:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.885 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.885 19:38:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.885 19:38:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:41.885 19:38:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:41.885 19:38:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:41.885 19:38:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.885 19:38:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.885 19:38:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.885 19:38:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:41.885 19:38:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.885 19:38:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.885 19:38:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:41.885 19:38:39 
-- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.885 19:38:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.885 19:38:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:41.885 19:38:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:41.885 19:38:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.885 19:38:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.885 19:38:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.885 19:38:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.885 19:38:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:41.885 19:38:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.885 19:38:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.885 19:38:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.885 19:38:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:41.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:29:41.885 00:29:41.885 --- 10.0.0.2 ping statistics --- 00:29:41.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.885 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:41.885 19:38:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:29:41.885 00:29:41.885 --- 10.0.0.1 ping statistics --- 00:29:41.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.885 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:41.885 19:38:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.885 19:38:39 -- nvmf/common.sh@410 -- # return 0 00:29:41.885 19:38:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:41.885 19:38:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.885 19:38:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:41.885 19:38:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.885 19:38:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:41.885 19:38:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:41.885 19:38:39 -- host/bdevperf.sh@25 -- # tgt_init 00:29:41.885 19:38:39 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:41.885 19:38:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:41.885 19:38:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:41.885 19:38:39 -- common/autotest_common.sh@10 -- # set +x 00:29:41.885 19:38:39 -- nvmf/common.sh@469 -- # nvmfpid=1326680 00:29:41.885 19:38:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:41.885 19:38:39 -- nvmf/common.sh@470 -- # waitforlisten 1326680 00:29:41.885 19:38:39 -- common/autotest_common.sh@829 -- # '[' -z 1326680 ']' 00:29:41.885 19:38:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.885 19:38:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:41.885 19:38:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.885 19:38:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:41.885 19:38:39 -- common/autotest_common.sh@10 -- # set +x 00:29:41.885 [2024-11-17 19:38:40.044940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:41.885 [2024-11-17 19:38:40.045047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.885 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.885 [2024-11-17 19:38:40.115416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.143 [2024-11-17 19:38:40.207993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:42.143 [2024-11-17 19:38:40.208162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.143 [2024-11-17 19:38:40.208182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.143 [2024-11-17 19:38:40.208196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.143 [2024-11-17 19:38:40.208280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.143 [2024-11-17 19:38:40.208463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.143 [2024-11-17 19:38:40.208466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.077 19:38:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.077 19:38:41 -- common/autotest_common.sh@862 -- # return 0 00:29:43.077 19:38:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:43.077 19:38:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:43.077 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:29:43.077 19:38:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.077 19:38:41 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.077 19:38:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.077 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:29:43.077 [2024-11-17 19:38:41.043775] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.077 19:38:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.077 19:38:41 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.077 19:38:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.077 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:29:43.077 Malloc0 00:29:43.077 19:38:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.077 19:38:41 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.077 19:38:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.077 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:29:43.077 19:38:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.077 19:38:41 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:43.077 19:38:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.077 19:38:41 -- common/autotest_common.sh@10 -- # set +x 
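The rpc_cmd calls traced here stand up the target side used by the bdevperf test: a TCP transport, a 64 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1; the TCP listener on 10.0.0.2:4420 is added in the step traced just below. A minimal sketch of the same provisioning as direct rpc.py calls; the suite's rpc_cmd wrapper and the default /var/tmp/spdk.sock RPC socket are assumptions here, the commands and arguments come from the trace.

    # Sketch: provision the NVMe-oF/TCP target exercised by bdevperf.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420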
00:29:43.077 19:38:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.077 19:38:41 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.077 19:38:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.077 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:29:43.077 [2024-11-17 19:38:41.101620] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.077 19:38:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.077 19:38:41 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:43.077 19:38:41 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:43.077 19:38:41 -- nvmf/common.sh@520 -- # config=() 00:29:43.077 19:38:41 -- nvmf/common.sh@520 -- # local subsystem config 00:29:43.077 19:38:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:43.077 19:38:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:43.077 { 00:29:43.077 "params": { 00:29:43.077 "name": "Nvme$subsystem", 00:29:43.077 "trtype": "$TEST_TRANSPORT", 00:29:43.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:43.077 "adrfam": "ipv4", 00:29:43.077 "trsvcid": "$NVMF_PORT", 00:29:43.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:43.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:43.077 "hdgst": ${hdgst:-false}, 00:29:43.077 "ddgst": ${ddgst:-false} 00:29:43.077 }, 00:29:43.077 "method": "bdev_nvme_attach_controller" 00:29:43.077 } 00:29:43.077 EOF 00:29:43.077 )") 00:29:43.077 19:38:41 -- nvmf/common.sh@542 -- # cat 00:29:43.077 19:38:41 -- nvmf/common.sh@544 -- # jq . 00:29:43.077 19:38:41 -- nvmf/common.sh@545 -- # IFS=, 00:29:43.077 19:38:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:43.077 "params": { 00:29:43.077 "name": "Nvme1", 00:29:43.077 "trtype": "tcp", 00:29:43.077 "traddr": "10.0.0.2", 00:29:43.077 "adrfam": "ipv4", 00:29:43.077 "trsvcid": "4420", 00:29:43.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:43.077 "hdgst": false, 00:29:43.077 "ddgst": false 00:29:43.077 }, 00:29:43.077 "method": "bdev_nvme_attach_controller" 00:29:43.077 }' 00:29:43.077 [2024-11-17 19:38:41.147980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:43.077 [2024-11-17 19:38:41.148044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326839 ] 00:29:43.077 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.077 [2024-11-17 19:38:41.207394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.077 [2024-11-17 19:38:41.295958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.336 Running I/O for 1 seconds... 
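The first bdevperf pass traced above runs a 1-second verify workload at queue depth 128 with 4 KiB I/O, taking its bdev configuration from gen_nvmf_target_json; the --json /dev/fd/62 argument in the trace is what a process substitution of that helper looks like. The generated bdev_nvme_attach_controller entry, shown expanded by the printf above, attaches controller Nvme1 to 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 as host nqn.2016-06.io.spdk:host1 with header and data digests disabled. A minimal sketch of the invocation, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available:

    # Sketch: the 1-second verify run traced above, with the generated target
    # JSON fed to bdevperf via process substitution.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1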
00:29:44.711 00:29:44.711 Latency(us) 00:29:44.711 [2024-11-17T18:38:42.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.711 [2024-11-17T18:38:42.978Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:44.711 Verification LBA range: start 0x0 length 0x4000 00:29:44.711 Nvme1n1 : 1.01 12953.31 50.60 0.00 0.00 9833.39 1262.17 17670.45 00:29:44.711 [2024-11-17T18:38:42.978Z] =================================================================================================================== 00:29:44.711 [2024-11-17T18:38:42.978Z] Total : 12953.31 50.60 0.00 0.00 9833.39 1262.17 17670.45 00:29:44.711 19:38:42 -- host/bdevperf.sh@30 -- # bdevperfpid=1326991 00:29:44.711 19:38:42 -- host/bdevperf.sh@32 -- # sleep 3 00:29:44.711 19:38:42 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:44.711 19:38:42 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:44.711 19:38:42 -- nvmf/common.sh@520 -- # config=() 00:29:44.711 19:38:42 -- nvmf/common.sh@520 -- # local subsystem config 00:29:44.711 19:38:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:44.711 19:38:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:44.711 { 00:29:44.711 "params": { 00:29:44.711 "name": "Nvme$subsystem", 00:29:44.711 "trtype": "$TEST_TRANSPORT", 00:29:44.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.711 "adrfam": "ipv4", 00:29:44.711 "trsvcid": "$NVMF_PORT", 00:29:44.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.711 "hdgst": ${hdgst:-false}, 00:29:44.711 "ddgst": ${ddgst:-false} 00:29:44.711 }, 00:29:44.711 "method": "bdev_nvme_attach_controller" 00:29:44.711 } 00:29:44.711 EOF 00:29:44.711 )") 00:29:44.711 19:38:42 -- nvmf/common.sh@542 -- # cat 00:29:44.711 19:38:42 -- nvmf/common.sh@544 -- # jq . 00:29:44.711 19:38:42 -- nvmf/common.sh@545 -- # IFS=, 00:29:44.711 19:38:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:44.711 "params": { 00:29:44.711 "name": "Nvme1", 00:29:44.711 "trtype": "tcp", 00:29:44.711 "traddr": "10.0.0.2", 00:29:44.711 "adrfam": "ipv4", 00:29:44.711 "trsvcid": "4420", 00:29:44.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.711 "hdgst": false, 00:29:44.711 "ddgst": false 00:29:44.711 }, 00:29:44.711 "method": "bdev_nvme_attach_controller" 00:29:44.711 }' 00:29:44.711 [2024-11-17 19:38:42.849542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:44.711 [2024-11-17 19:38:42.849627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326991 ] 00:29:44.711 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.711 [2024-11-17 19:38:42.910137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.969 [2024-11-17 19:38:42.995501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.227 Running I/O for 15 seconds... 
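The second run uses -t 15 -f and is left running while the script kills the nvmf target (pid 1326680) out from under it, as traced next; the ABORTED - SQ DELETION completions that follow are the host side aborting the I/O that was in flight when the connection dropped. A minimal sketch of that sequence, assuming the bdevperf process is backgrounded and reaped with wait (the & / $! / wait handling is not visible in this excerpt); the pids, flags and sleeps come from the trace.

    # Sketch: kill the target mid-run to exercise bdevperf's error handling.
    "$bdevperf" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"   # nvmfpid is the nvmf_tgt started earlier (1326680 here)
    sleep 3
    wait "$bdevperfpid"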
00:29:47.760 19:38:45 -- host/bdevperf.sh@33 -- # kill -9 1326680 00:29:47.760 19:38:45 -- host/bdevperf.sh@35 -- # sleep 3 00:29:47.760 [2024-11-17 19:38:45.823935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.823981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.760 [2024-11-17 19:38:45.824773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.760 [2024-11-17 19:38:45.824786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.824814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.824842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.824870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.824899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.824926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.824955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.824988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 
[2024-11-17 19:38:45.825556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.761 [2024-11-17 19:38:45.825943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.761 [2024-11-17 19:38:45.825975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.761 [2024-11-17 19:38:45.825989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.826890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.826979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.826992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.827019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.827063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.827090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110008 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:47.762 [2024-11-17 19:38:45.827118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.827147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.762 [2024-11-17 19:38:45.827174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.762 [2024-11-17 19:38:45.827189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.763 [2024-11-17 19:38:45.827230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.763 [2024-11-17 19:38:45.827258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.763 [2024-11-17 19:38:45.827290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.763 [2024-11-17 19:38:45.827319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.763 [2024-11-17 19:38:45.827347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:47.763 [2024-11-17 19:38:45.827405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.763 [2024-11-17 19:38:45.827434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 
19:38:45.827707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.763 [2024-11-17 19:38:45.827889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.827903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde49a0 is same with the state(5) to be set 00:29:47.763 [2024-11-17 19:38:45.827924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:47.763 [2024-11-17 19:38:45.827936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:47.763 [2024-11-17 19:38:45.827947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109584 len:8 PRP1 0x0 PRP2 0x0 00:29:47.763 [2024-11-17 19:38:45.827974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.763 [2024-11-17 19:38:45.828036] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde49a0 was disconnected and freed. reset controller. 
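Every completion notice in the block above ends with a status pair of the form (SCT/SC); the repeated (00/08) is status code type 0x00 (generic command status) with status code 0x08, which the log itself labels ABORTED - SQ DELETION: once the process was killed (host/bdevperf.sh@33 at the top of this excerpt), the submission queue went away and every in-flight READ/WRITE on qid:1 was completed manually with that status before the qpair was freed and the controller reset. Below is a minimal, self-contained C sketch of how to read that pair; the function names and the lookup table are illustrative only, not SPDK code, and cover only the codes visible in this log.

/* decode_status.c - illustrative decoder for the "(SCT/SC)" pair printed in
 * the completion notices above, e.g. "(00/08)". Not SPDK code; the table
 * below only covers the generic status codes that appear in this log. */
#include <stdio.h>

/* Name a status code within status code type 0x00 (generic command status). */
static const char *generic_sc_name(unsigned sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "OTHER/UNKNOWN";
    }
}

static void decode(unsigned sct, unsigned sc)
{
    if (sct == 0x00) {
        printf("(%02x/%02x) -> %s\n", sct, sc, generic_sc_name(sc));
    } else {
        printf("(%02x/%02x) -> non-generic status code type\n", sct, sc);
    }
}

int main(void)
{
    decode(0x00, 0x08);   /* the pair repeated throughout the log above */
    decode(0x00, 0x00);
    return 0;
}

In SPDK the same two values come from the completion's status fields, which appears to be what spdk_nvme_print_completion renders here as the trailing (00/08).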
00:29:47.763 [2024-11-17 19:38:45.830339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.763 [2024-11-17 19:38:45.830411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.763 [2024-11-17 19:38:45.830834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.763 [2024-11-17 19:38:45.830978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.763 [2024-11-17 19:38:45.831004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.763 [2024-11-17 19:38:45.831026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.763 [2024-11-17 19:38:45.831149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.763 [2024-11-17 19:38:45.831272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.763 [2024-11-17 19:38:45.831293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.763 [2024-11-17 19:38:45.831309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.763 [2024-11-17 19:38:45.833756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.763 [2024-11-17 19:38:45.843051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.763 [2024-11-17 19:38:45.843485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.763 [2024-11-17 19:38:45.843637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.763 [2024-11-17 19:38:45.843666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.763 [2024-11-17 19:38:45.843694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.763 [2024-11-17 19:38:45.843896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.763 [2024-11-17 19:38:45.844065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.763 [2024-11-17 19:38:45.844088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.763 [2024-11-17 19:38:45.844103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.763 [2024-11-17 19:38:45.846471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
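Each reset cycle above begins with posix_sock_create reporting connect() failed, errno = 111; on Linux, 111 is ECONNREFUSED, which is exactly what a TCP connect to 10.0.0.2 port 4420 returns while nothing is listening there (the target side has just been killed). The plain POSIX sketch below reproduces the same errno; the address and port are the ones printed in the log, and this is not the SPDK sock layer itself.

/* connect_probe.c - reproduce the "connect() failed, errno = 111" pattern
 * seen above with a plain POSIX TCP connect. Address/port are taken from
 * the log (10.0.0.2:4420); this is not the SPDK posix sock implementation. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target side this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}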
00:29:47.763 [2024-11-17 19:38:45.855696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.763 [2024-11-17 19:38:45.855998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.763 [2024-11-17 19:38:45.856133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.763 [2024-11-17 19:38:45.856162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.763 [2024-11-17 19:38:45.856180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.763 [2024-11-17 19:38:45.856271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.763 [2024-11-17 19:38:45.856421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.763 [2024-11-17 19:38:45.856444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.763 [2024-11-17 19:38:45.856459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.763 [2024-11-17 19:38:45.858789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.764 [2024-11-17 19:38:45.868226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.868597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.868744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.868770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.868786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.868967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.869100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.869123] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.869138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.871505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.764 [2024-11-17 19:38:45.880830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.881140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.881269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.881296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.881314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.881460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.881610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.881633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.881648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.884064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.764 [2024-11-17 19:38:45.893168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.893459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.893624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.893650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.893666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.893880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.894020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.894042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.894056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.896559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
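The other errno in these cycles is the "(9): Bad file descriptor" from nvme_tcp_qpair_process_completions: 9 is EBADF, the expected result of trying to flush a socket whose descriptor has already been closed during qpair teardown. A tiny illustration of where that value comes from, using ordinary POSIX calls rather than the SPDK TCP transport:

/* ebadf_demo.c - show where "(9): Bad file descriptor" comes from: any
 * syscall on an fd that has already been closed fails with EBADF (9). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    close(fd);                                /* teardown closed the socket */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {          /* later flush attempt */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}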
00:29:47.764 [2024-11-17 19:38:45.905821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.906132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.906279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.906304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.906319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.906416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.906555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.906579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.906594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.908874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.764 [2024-11-17 19:38:45.918285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.918609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.918713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.918743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.918761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.918926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.919094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.919117] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.919132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.921606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.764 [2024-11-17 19:38:45.930971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.931225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.931338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.931389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.931408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.931572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.931770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.931794] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.931809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.934104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.764 [2024-11-17 19:38:45.943413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.764 [2024-11-17 19:38:45.943724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.943853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.764 [2024-11-17 19:38:45.943882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.764 [2024-11-17 19:38:45.943899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.764 [2024-11-17 19:38:45.944099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.764 [2024-11-17 19:38:45.944267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.764 [2024-11-17 19:38:45.944296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.764 [2024-11-17 19:38:45.944312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.764 [2024-11-17 19:38:45.946518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.764 [2024-11-17 19:38:45.956002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.765 [2024-11-17 19:38:45.956334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.956456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.956485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.765 [2024-11-17 19:38:45.956503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.765 [2024-11-17 19:38:45.956667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.765 [2024-11-17 19:38:45.956811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.765 [2024-11-17 19:38:45.956835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.765 [2024-11-17 19:38:45.956850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.765 [2024-11-17 19:38:45.959094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.765 [2024-11-17 19:38:45.968553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.765 [2024-11-17 19:38:45.968917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.969068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.969094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.765 [2024-11-17 19:38:45.969110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.765 [2024-11-17 19:38:45.969280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.765 [2024-11-17 19:38:45.969454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.765 [2024-11-17 19:38:45.969475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.765 [2024-11-17 19:38:45.969489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.765 [2024-11-17 19:38:45.972017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
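Taken together, the cycles above repeat roughly every 12-13 ms: "resetting controller", a connect attempt that is refused, "controller reinitialization failed", "Resetting controller failed.", then the next attempt. The shape is a retry loop with a short delay around the connect; the sketch below mirrors only that shape, with an assumed 12 ms delay and 100-attempt cap rather than SPDK's actual reconnect policy.

/* retry_connect.c - the shape of the reconnect cycle seen above: try to
 * connect, on failure wait briefly and try again, give up after a bound.
 * The 12 ms delay and 100-attempt cap are illustrative assumptions only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        return fd;                            /* connected: caller owns fd */
    }
    close(fd);
    return -1;
}

int main(void)
{
    const struct timespec delay = { 0, 12 * 1000 * 1000 };  /* ~12 ms */

    for (int attempt = 1; attempt <= 100; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("reconnected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        printf("attempt %d: connect failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        nanosleep(&delay, NULL);
    }
    fprintf(stderr, "giving up: target never came back\n");
    return 1;
}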
00:29:47.765 [2024-11-17 19:38:45.981044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.765 [2024-11-17 19:38:45.981351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.981481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.981509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.765 [2024-11-17 19:38:45.981526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.765 [2024-11-17 19:38:45.981738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.765 [2024-11-17 19:38:45.981871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.765 [2024-11-17 19:38:45.981894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.765 [2024-11-17 19:38:45.981914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.765 [2024-11-17 19:38:45.984282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.765 [2024-11-17 19:38:45.993611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.765 [2024-11-17 19:38:45.993947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.994053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:45.994081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.765 [2024-11-17 19:38:45.994098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.765 [2024-11-17 19:38:45.994244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.765 [2024-11-17 19:38:45.994431] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.765 [2024-11-17 19:38:45.994455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.765 [2024-11-17 19:38:45.994470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.765 [2024-11-17 19:38:45.996659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.765 [2024-11-17 19:38:46.006078] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.765 [2024-11-17 19:38:46.006370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:46.006500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:46.006528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.765 [2024-11-17 19:38:46.006545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.765 [2024-11-17 19:38:46.006705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.765 [2024-11-17 19:38:46.006875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.765 [2024-11-17 19:38:46.006898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.765 [2024-11-17 19:38:46.006913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.765 [2024-11-17 19:38:46.009099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.765 [2024-11-17 19:38:46.018713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.765 [2024-11-17 19:38:46.019045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:46.019162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.765 [2024-11-17 19:38:46.019188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:47.765 [2024-11-17 19:38:46.019204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:47.765 [2024-11-17 19:38:46.019367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:47.765 [2024-11-17 19:38:46.019631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.765 [2024-11-17 19:38:46.019668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.765 [2024-11-17 19:38:46.019710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.765 [2024-11-17 19:38:46.022058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.025 [2024-11-17 19:38:46.031272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.031642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.031745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.031774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.031792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.031940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.032109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.032132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.032147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.034683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.025 [2024-11-17 19:38:46.043742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.044014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.044139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.044167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.044184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.044322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.044462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.044483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.044497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.046843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.025 [2024-11-17 19:38:46.056311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.056664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.056843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.056869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.056886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.057034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.057273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.057297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.057312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.059926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.025 [2024-11-17 19:38:46.068963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.069287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.069420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.069448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.069467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.069631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.069848] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.069873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.069888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.072346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.025 [2024-11-17 19:38:46.081614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.081963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.082091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.082120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.082137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.082302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.082489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.082512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.082527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.085010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.025 [2024-11-17 19:38:46.094207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.094511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.094633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.094661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.094687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.094819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.094969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.094992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.095008] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.097284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.025 [2024-11-17 19:38:46.106949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.107304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.107427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.107452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.107468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.107615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.107823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.107847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.107862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.110083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.025 [2024-11-17 19:38:46.119851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.120129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.120287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.120316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.120333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.120479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.025 [2024-11-17 19:38:46.120665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.025 [2024-11-17 19:38:46.120699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.025 [2024-11-17 19:38:46.120716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.025 [2024-11-17 19:38:46.123083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.025 [2024-11-17 19:38:46.132404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.025 [2024-11-17 19:38:46.132682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.132833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.025 [2024-11-17 19:38:46.132862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.025 [2024-11-17 19:38:46.132879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.025 [2024-11-17 19:38:46.133007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.133193] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.133216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.133232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.135562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.026 [2024-11-17 19:38:46.145040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.145358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.145470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.145497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.145519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.145639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.145806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.145828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.145842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.147961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.026 [2024-11-17 19:38:46.157449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.157716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.157898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.157944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.157962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.158144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.158312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.158335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.158350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.160712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.026 [2024-11-17 19:38:46.169947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.170280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.170427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.170453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.170469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.170605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.170829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.170851] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.170864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.173143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.026 [2024-11-17 19:38:46.182480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.182842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.182980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.183009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.183026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.183214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.183365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.183387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.183402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.185632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.026 [2024-11-17 19:38:46.194950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.195257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.195413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.195442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.195459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.195641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.195799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.195820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.195833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.198124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.026 [2024-11-17 19:38:46.207366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.207687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.207824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.207849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.207865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.208093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.208267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.208289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.208303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.210544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.026 [2024-11-17 19:38:46.219929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.220226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.220359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.220385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.220401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.220613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.220822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.220844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.220858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.222910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.026 [2024-11-17 19:38:46.232644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.232963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.233064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.233092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.233110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.233256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.233443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.233466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.233481] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.235911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.026 [2024-11-17 19:38:46.245093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.026 [2024-11-17 19:38:46.245402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.245541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.026 [2024-11-17 19:38:46.245566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.026 [2024-11-17 19:38:46.245598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.026 [2024-11-17 19:38:46.245717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.026 [2024-11-17 19:38:46.245903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.026 [2024-11-17 19:38:46.245924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.026 [2024-11-17 19:38:46.245938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.026 [2024-11-17 19:38:46.248407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.027 [2024-11-17 19:38:46.257491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.027 [2024-11-17 19:38:46.257849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.027 [2024-11-17 19:38:46.257974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.027 [2024-11-17 19:38:46.258021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.027 [2024-11-17 19:38:46.258037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.027 [2024-11-17 19:38:46.258206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.027 [2024-11-17 19:38:46.258347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.027 [2024-11-17 19:38:46.258373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.027 [2024-11-17 19:38:46.258388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.027 [2024-11-17 19:38:46.260536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.027 [2024-11-17 19:38:46.269890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.027 [2024-11-17 19:38:46.270173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.027 [2024-11-17 19:38:46.270297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.027 [2024-11-17 19:38:46.270322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.027 [2024-11-17 19:38:46.270338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.027 [2024-11-17 19:38:46.270486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.027 [2024-11-17 19:38:46.270683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.027 [2024-11-17 19:38:46.270704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.027 [2024-11-17 19:38:46.270717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.027 [2024-11-17 19:38:46.272690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.027 [2024-11-17 19:38:46.282430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.027 [2024-11-17 19:38:46.282854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.027 [2024-11-17 19:38:46.282948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.027 [2024-11-17 19:38:46.282992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.027 [2024-11-17 19:38:46.283007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.027 [2024-11-17 19:38:46.283200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.027 [2024-11-17 19:38:46.283369] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.027 [2024-11-17 19:38:46.283391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.027 [2024-11-17 19:38:46.283406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.027 [2024-11-17 19:38:46.285770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.286 [2024-11-17 19:38:46.294945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.286 [2024-11-17 19:38:46.295297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.286 [2024-11-17 19:38:46.295433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.286 [2024-11-17 19:38:46.295463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.286 [2024-11-17 19:38:46.295482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.295648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.295773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.295798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.295819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.298280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.287 [2024-11-17 19:38:46.307423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.307769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.307872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.307900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.307917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.308148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.308336] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.308359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.308374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.310600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.287 [2024-11-17 19:38:46.319838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.320129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.320275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.320302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.320319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.320480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.320631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.320653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.320668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.322887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.287 [2024-11-17 19:38:46.332255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.332571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.332704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.332734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.332753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.332918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.333122] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.333145] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.333160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.335569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.287 [2024-11-17 19:38:46.344933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.345236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.345360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.345388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.345406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.345516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.345713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.345738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.345753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.347868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.287 [2024-11-17 19:38:46.357602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.357932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.358026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.358054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.358072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.358183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.358368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.358391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.358406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.360842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.287 [2024-11-17 19:38:46.370035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.370309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.370438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.370467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.370485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.370667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.370812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.370835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.370850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.372949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.287 [2024-11-17 19:38:46.382664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.383007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.383136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.383165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.383182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.383311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.383479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.383512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.383527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.385839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.287 [2024-11-17 19:38:46.395013] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.395364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.395494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.395523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.395540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.395650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.287 [2024-11-17 19:38:46.395843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.287 [2024-11-17 19:38:46.395864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.287 [2024-11-17 19:38:46.395877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.287 [2024-11-17 19:38:46.398210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.287 [2024-11-17 19:38:46.407539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.287 [2024-11-17 19:38:46.407887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.408042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.287 [2024-11-17 19:38:46.408071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.287 [2024-11-17 19:38:46.408087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.287 [2024-11-17 19:38:46.408234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.408366] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.408389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.408404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.410781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.288 [2024-11-17 19:38:46.420192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.420492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.420650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.420686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.420707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.420889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.421040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.421063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.421078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.423425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.288 [2024-11-17 19:38:46.432667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.432966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.433081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.433110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.433127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.433272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.433441] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.433464] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.433478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.435873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.288 [2024-11-17 19:38:46.445222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.445547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.445712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.445739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.445755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.445919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.446065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.446088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.446103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.448486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.288 [2024-11-17 19:38:46.457874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.458285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.458447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.458477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.458494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.458669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.458837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.458860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.458875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.461307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.288 [2024-11-17 19:38:46.470306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.470580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.470709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.470739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.470756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.470922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.471090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.471113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.471128] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.473431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.288 [2024-11-17 19:38:46.482905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.483246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.483420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.483449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.483466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.483631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.483808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.483832] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.483847] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.486087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.288 [2024-11-17 19:38:46.495778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.496166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.496388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.496437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.496460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.496660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.496841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.496865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.496880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.499190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.288 [2024-11-17 19:38:46.508149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.508423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.508550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.508581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.508599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.508736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.508905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.508928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.508943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.511206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.288 [2024-11-17 19:38:46.520748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.288 [2024-11-17 19:38:46.521041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.521170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.288 [2024-11-17 19:38:46.521199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.288 [2024-11-17 19:38:46.521217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.288 [2024-11-17 19:38:46.521418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.288 [2024-11-17 19:38:46.521569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.288 [2024-11-17 19:38:46.521591] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.288 [2024-11-17 19:38:46.521606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.288 [2024-11-17 19:38:46.524088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.289 [2024-11-17 19:38:46.533430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.289 [2024-11-17 19:38:46.533780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.289 [2024-11-17 19:38:46.533885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.289 [2024-11-17 19:38:46.533915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.289 [2024-11-17 19:38:46.533933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.289 [2024-11-17 19:38:46.534157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.289 [2024-11-17 19:38:46.534326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.289 [2024-11-17 19:38:46.534349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.289 [2024-11-17 19:38:46.534363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.289 [2024-11-17 19:38:46.536722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.289 [2024-11-17 19:38:46.546225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.289 [2024-11-17 19:38:46.546535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.289 [2024-11-17 19:38:46.546691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.289 [2024-11-17 19:38:46.546720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.289 [2024-11-17 19:38:46.546738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.289 [2024-11-17 19:38:46.546938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.289 [2024-11-17 19:38:46.547160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.289 [2024-11-17 19:38:46.547184] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.289 [2024-11-17 19:38:46.547198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.289 [2024-11-17 19:38:46.549597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.548 [2024-11-17 19:38:46.558890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.548 [2024-11-17 19:38:46.559231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.548 [2024-11-17 19:38:46.559355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.548 [2024-11-17 19:38:46.559385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.548 [2024-11-17 19:38:46.559403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.548 [2024-11-17 19:38:46.559551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.548 [2024-11-17 19:38:46.559768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.548 [2024-11-17 19:38:46.559793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.548 [2024-11-17 19:38:46.559808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.548 [2024-11-17 19:38:46.562264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.548 [2024-11-17 19:38:46.571613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.548 [2024-11-17 19:38:46.571929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.548 [2024-11-17 19:38:46.572055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.548 [2024-11-17 19:38:46.572083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.548 [2024-11-17 19:38:46.572101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.548 [2024-11-17 19:38:46.572212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.548 [2024-11-17 19:38:46.572349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.548 [2024-11-17 19:38:46.572372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.548 [2024-11-17 19:38:46.572387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.548 [2024-11-17 19:38:46.574762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.548 [2024-11-17 19:38:46.584377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.548 [2024-11-17 19:38:46.584610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.548 [2024-11-17 19:38:46.584739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.584769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.584787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.585005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.585174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.585197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.585212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.587711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.549 [2024-11-17 19:38:46.597031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.597361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.597467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.597495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.597513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.597723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.597927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.597950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.597966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.600171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.549 [2024-11-17 19:38:46.609718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.609972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.610164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.610217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.610234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.610434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.610602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.610630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.610646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.613095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.549 [2024-11-17 19:38:46.622375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.622722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.622830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.622860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.622878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.623043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.623230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.623253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.623268] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.625457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.549 [2024-11-17 19:38:46.635130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.635399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.635525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.635553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.635571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.635747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.635898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.635921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.635936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.638328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.549 [2024-11-17 19:38:46.647842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.648146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.648271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.648296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.648312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.648457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.648618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.648642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.648662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.651076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.549 [2024-11-17 19:38:46.660527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.660764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.660894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.660922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.660940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.661123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.661309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.661332] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.661347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.663624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.549 [2024-11-17 19:38:46.673121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.673423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.673550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.673578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.673595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.673734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.673903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.673926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.673942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.676312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.549 [2024-11-17 19:38:46.685785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.686107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.686237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.686266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.686284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.686449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.686654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.686686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.686719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.688974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.549 [2024-11-17 19:38:46.698465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.698764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.698889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.698915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.549 [2024-11-17 19:38:46.698931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.549 [2024-11-17 19:38:46.699109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.549 [2024-11-17 19:38:46.699241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.549 [2024-11-17 19:38:46.699264] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.549 [2024-11-17 19:38:46.699279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.549 [2024-11-17 19:38:46.701597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.549 [2024-11-17 19:38:46.710897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.549 [2024-11-17 19:38:46.711230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.711332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.549 [2024-11-17 19:38:46.711361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.711378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.711543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.711684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.711730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.711744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.713989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.550 [2024-11-17 19:38:46.723627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.724034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.724187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.724216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.724233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.724397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.724565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.724588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.724603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.726926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.550 [2024-11-17 19:38:46.736130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.736434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.736588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.736617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.736634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.736812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.736945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.736968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.736984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.739333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.550 [2024-11-17 19:38:46.748718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.749010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.749144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.749173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.749190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.749372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.749581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.749605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.749620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.751945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.550 [2024-11-17 19:38:46.761198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.761483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.761640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.761668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.761699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.761829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.761998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.762021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.762036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.764313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.550 [2024-11-17 19:38:46.773750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.773957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.774196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.774253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.774270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.774488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.774686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.774710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.774725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.777091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.550 [2024-11-17 19:38:46.786172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.786463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.786595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.786623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.786641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.786835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.787023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.787047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.787062] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.789195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.550 [2024-11-17 19:38:46.798585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.798906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.799064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.799092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.799110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.799256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.799389] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.799412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.799427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.550 [2024-11-17 19:38:46.801672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.550 [2024-11-17 19:38:46.811261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.550 [2024-11-17 19:38:46.811558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.811689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.550 [2024-11-17 19:38:46.811720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.550 [2024-11-17 19:38:46.811744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.550 [2024-11-17 19:38:46.811958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.550 [2024-11-17 19:38:46.812167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.550 [2024-11-17 19:38:46.812194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.550 [2024-11-17 19:38:46.812209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.810 [2024-11-17 19:38:46.814846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.810 [2024-11-17 19:38:46.823802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.810 [2024-11-17 19:38:46.824147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.824285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.824314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.810 [2024-11-17 19:38:46.824333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.810 [2024-11-17 19:38:46.824516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.810 [2024-11-17 19:38:46.824665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.810 [2024-11-17 19:38:46.824704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.810 [2024-11-17 19:38:46.824720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.810 [2024-11-17 19:38:46.827196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.810 [2024-11-17 19:38:46.836383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.810 [2024-11-17 19:38:46.836665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.836801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.836830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.810 [2024-11-17 19:38:46.836848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.810 [2024-11-17 19:38:46.836995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.810 [2024-11-17 19:38:46.837164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.810 [2024-11-17 19:38:46.837187] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.810 [2024-11-17 19:38:46.837202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.810 [2024-11-17 19:38:46.839605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.810 [2024-11-17 19:38:46.849048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.810 [2024-11-17 19:38:46.849293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.849449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.849478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.810 [2024-11-17 19:38:46.849495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.810 [2024-11-17 19:38:46.849685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.810 [2024-11-17 19:38:46.849855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.810 [2024-11-17 19:38:46.849878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.810 [2024-11-17 19:38:46.849893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.810 [2024-11-17 19:38:46.852224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.810 [2024-11-17 19:38:46.861799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.810 [2024-11-17 19:38:46.862107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.862231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.810 [2024-11-17 19:38:46.862259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.810 [2024-11-17 19:38:46.862277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.810 [2024-11-17 19:38:46.862442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.810 [2024-11-17 19:38:46.862574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.810 [2024-11-17 19:38:46.862597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.810 [2024-11-17 19:38:46.862612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.810 [2024-11-17 19:38:46.864935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.811 [2024-11-17 19:38:46.874350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.874661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.874779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.874807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.874824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.875060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.875211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.875234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.875249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.877384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.811 [2024-11-17 19:38:46.886982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.887287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.887458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.887484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.887499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.887704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.887855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.887879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.887894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.890279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.811 [2024-11-17 19:38:46.899653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.899956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.900101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.900127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.900143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.900303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.900482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.900506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.900521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.902758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.811 [2024-11-17 19:38:46.912159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.912449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.912605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.912633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.912650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.912790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.912959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.912982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.912998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.915437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.811 [2024-11-17 19:38:46.924810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.925138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.925291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.925317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.925332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.925471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.925658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.925698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.925716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.928084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.811 [2024-11-17 19:38:46.937341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.937632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.937798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.937827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.937844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.937991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.938141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.938164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.938180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.940513] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.811 [2024-11-17 19:38:46.950068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.950384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.950535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.950565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.950582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.950778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.950912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.950935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.950950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.953261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.811 [2024-11-17 19:38:46.962744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.963063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.963306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.963358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.963376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.963575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.963755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.963779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.963800] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.965915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.811 [2024-11-17 19:38:46.975290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.975656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.975759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.975785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.811 [2024-11-17 19:38:46.975800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.811 [2024-11-17 19:38:46.975953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.811 [2024-11-17 19:38:46.976140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.811 [2024-11-17 19:38:46.976164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.811 [2024-11-17 19:38:46.976179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.811 [2024-11-17 19:38:46.978514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.811 [2024-11-17 19:38:46.987754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.811 [2024-11-17 19:38:46.988049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.988194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.811 [2024-11-17 19:38:46.988239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:46.988257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:46.988403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:46.988553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:46.988576] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:46.988591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:46.990842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.812 [2024-11-17 19:38:47.000157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.812 [2024-11-17 19:38:47.000455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.000585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.000614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:47.000631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:47.000808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:47.000979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:47.001002] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:47.001017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:47.003480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.812 [2024-11-17 19:38:47.012846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.812 [2024-11-17 19:38:47.013216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.013346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.013375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:47.013392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:47.013557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:47.013703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:47.013727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:47.013742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:47.016110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.812 [2024-11-17 19:38:47.025405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.812 [2024-11-17 19:38:47.025733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.025864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.025890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:47.025906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:47.026095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:47.026300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:47.026323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:47.026338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:47.028453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.812 [2024-11-17 19:38:47.037776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.812 [2024-11-17 19:38:47.038047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.038166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.038194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:47.038212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:47.038340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:47.038490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:47.038513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:47.038527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:47.040886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.812 [2024-11-17 19:38:47.050517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.812 [2024-11-17 19:38:47.050829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.050984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.051012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:47.051030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:47.051158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:47.051380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:47.051403] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:47.051418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:47.053759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.812 [2024-11-17 19:38:47.063123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.812 [2024-11-17 19:38:47.063499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.063626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.812 [2024-11-17 19:38:47.063655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:48.812 [2024-11-17 19:38:47.063672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:48.812 [2024-11-17 19:38:47.063850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:48.812 [2024-11-17 19:38:47.064000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.812 [2024-11-17 19:38:47.064023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.812 [2024-11-17 19:38:47.064038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.812 [2024-11-17 19:38:47.066334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.072 [2024-11-17 19:38:47.075827] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.076197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.076333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.076363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.076382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.076548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.076711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.076736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.072 [2024-11-17 19:38:47.076751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.072 [2024-11-17 19:38:47.079190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.072 [2024-11-17 19:38:47.088372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.088733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.088868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.088898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.088917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.089100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.089305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.089328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.072 [2024-11-17 19:38:47.089343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.072 [2024-11-17 19:38:47.091552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.072 [2024-11-17 19:38:47.100913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.101239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.101373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.101402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.101420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.101585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.101748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.101772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.072 [2024-11-17 19:38:47.101788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.072 [2024-11-17 19:38:47.104154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.072 [2024-11-17 19:38:47.113215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.113488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.113617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.113645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.113663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.113821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.113990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.114013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.072 [2024-11-17 19:38:47.114029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.072 [2024-11-17 19:38:47.116363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.072 [2024-11-17 19:38:47.125838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.126160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.126283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.126318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.126337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.126465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.126597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.126620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.072 [2024-11-17 19:38:47.126634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.072 [2024-11-17 19:38:47.128869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.072 [2024-11-17 19:38:47.138461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.138755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.138891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.138919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.138936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.139101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.139269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.139292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.072 [2024-11-17 19:38:47.139307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.072 [2024-11-17 19:38:47.141736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.072 [2024-11-17 19:38:47.150988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.072 [2024-11-17 19:38:47.151319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.151472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.072 [2024-11-17 19:38:47.151500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.072 [2024-11-17 19:38:47.151517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.072 [2024-11-17 19:38:47.151747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.072 [2024-11-17 19:38:47.151934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.072 [2024-11-17 19:38:47.151957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.151972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.154351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.073 [2024-11-17 19:38:47.163667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.163995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.164125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.164166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.164186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.164318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.164492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.164515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.164530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.166834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.073 [2024-11-17 19:38:47.176340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.176663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.176799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.176827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.176844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.177009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.177195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.177218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.177233] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.179718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.073 [2024-11-17 19:38:47.189042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.189387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.189511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.189538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.189556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.189731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.189937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.189960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.189976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.192197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.073 [2024-11-17 19:38:47.201591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.201913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.202051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.202079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.202096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.202266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.202435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.202458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.202473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.204760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.073 [2024-11-17 19:38:47.214434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.214703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.214900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.214928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.214946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.215146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.215350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.215373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.215388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.217612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.073 [2024-11-17 19:38:47.227124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.227440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.227595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.227624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.227641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.227834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.228004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.228027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.228042] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.230317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.073 [2024-11-17 19:38:47.239652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.239970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.240153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.240204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.240222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.240386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.240596] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.240620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.240634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.242814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.073 [2024-11-17 19:38:47.252278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.252532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.252652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.252689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.252709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.252928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.253078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.253101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.253116] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.255555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.073 [2024-11-17 19:38:47.264908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.265207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.265332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.073 [2024-11-17 19:38:47.265358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.073 [2024-11-17 19:38:47.265373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.073 [2024-11-17 19:38:47.265512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.073 [2024-11-17 19:38:47.265663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.073 [2024-11-17 19:38:47.265697] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.073 [2024-11-17 19:38:47.265713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.073 [2024-11-17 19:38:47.268007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.073 [2024-11-17 19:38:47.277687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.073 [2024-11-17 19:38:47.278033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.278127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.278156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.074 [2024-11-17 19:38:47.278173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.074 [2024-11-17 19:38:47.278319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.074 [2024-11-17 19:38:47.278470] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.074 [2024-11-17 19:38:47.278501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.074 [2024-11-17 19:38:47.278517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.074 [2024-11-17 19:38:47.280803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.074 [2024-11-17 19:38:47.290381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.074 [2024-11-17 19:38:47.290717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.290846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.290874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.074 [2024-11-17 19:38:47.290892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.074 [2024-11-17 19:38:47.291038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.074 [2024-11-17 19:38:47.291225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.074 [2024-11-17 19:38:47.291248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.074 [2024-11-17 19:38:47.291263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.074 [2024-11-17 19:38:47.293561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.074 [2024-11-17 19:38:47.302940] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.074 [2024-11-17 19:38:47.303226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.303334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.303362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.074 [2024-11-17 19:38:47.303379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.074 [2024-11-17 19:38:47.303526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.074 [2024-11-17 19:38:47.303724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.074 [2024-11-17 19:38:47.303748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.074 [2024-11-17 19:38:47.303763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.074 [2024-11-17 19:38:47.306128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.074 [2024-11-17 19:38:47.315563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.074 [2024-11-17 19:38:47.315931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.316032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.316058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.074 [2024-11-17 19:38:47.316073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.074 [2024-11-17 19:38:47.316248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.074 [2024-11-17 19:38:47.316416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.074 [2024-11-17 19:38:47.316439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.074 [2024-11-17 19:38:47.316460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.074 [2024-11-17 19:38:47.318707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.074 [2024-11-17 19:38:47.328194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.074 [2024-11-17 19:38:47.328492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.328643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.074 [2024-11-17 19:38:47.328671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.074 [2024-11-17 19:38:47.328697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.074 [2024-11-17 19:38:47.328891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.074 [2024-11-17 19:38:47.329054] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.074 [2024-11-17 19:38:47.329076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.074 [2024-11-17 19:38:47.329091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.074 [2024-11-17 19:38:47.331080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.342 [2024-11-17 19:38:47.340420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.342 [2024-11-17 19:38:47.340702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.342 [2024-11-17 19:38:47.340807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.342 [2024-11-17 19:38:47.340834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.343 [2024-11-17 19:38:47.340851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.343 [2024-11-17 19:38:47.340967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.343 [2024-11-17 19:38:47.341119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.343 [2024-11-17 19:38:47.341140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.343 [2024-11-17 19:38:47.341155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.343 [2024-11-17 19:38:47.343295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.343 [2024-11-17 19:38:47.353132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.343 [2024-11-17 19:38:47.353510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.353641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.353670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.343 [2024-11-17 19:38:47.353702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.343 [2024-11-17 19:38:47.353850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.343 [2024-11-17 19:38:47.354006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.343 [2024-11-17 19:38:47.354029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.343 [2024-11-17 19:38:47.354044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.343 [2024-11-17 19:38:47.356410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.343 [2024-11-17 19:38:47.365934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.343 [2024-11-17 19:38:47.366301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.366424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.366450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.343 [2024-11-17 19:38:47.366466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.343 [2024-11-17 19:38:47.366700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.343 [2024-11-17 19:38:47.366851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.343 [2024-11-17 19:38:47.366874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.343 [2024-11-17 19:38:47.366889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.343 [2024-11-17 19:38:47.369122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.343 [2024-11-17 19:38:47.378522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.343 [2024-11-17 19:38:47.378783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.378912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.378940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.343 [2024-11-17 19:38:47.378959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.343 [2024-11-17 19:38:47.379123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.343 [2024-11-17 19:38:47.379273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.343 [2024-11-17 19:38:47.379296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.343 [2024-11-17 19:38:47.379310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.343 [2024-11-17 19:38:47.381498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.343 [2024-11-17 19:38:47.391114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.343 [2024-11-17 19:38:47.391424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.391523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.343 [2024-11-17 19:38:47.391552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.343 [2024-11-17 19:38:47.391570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.343 [2024-11-17 19:38:47.391742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.343 [2024-11-17 19:38:47.391908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.344 [2024-11-17 19:38:47.391929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.344 [2024-11-17 19:38:47.391942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.344 [2024-11-17 19:38:47.394342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.344 [2024-11-17 19:38:47.403457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.344 [2024-11-17 19:38:47.403791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.344 [2024-11-17 19:38:47.403952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.344 [2024-11-17 19:38:47.403981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.344 [2024-11-17 19:38:47.403999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.344 [2024-11-17 19:38:47.404146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.344 [2024-11-17 19:38:47.404350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.344 [2024-11-17 19:38:47.404373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.344 [2024-11-17 19:38:47.404388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.344 [2024-11-17 19:38:47.406822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.344 [2024-11-17 19:38:47.415816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.344 [2024-11-17 19:38:47.416125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.344 [2024-11-17 19:38:47.416278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.344 [2024-11-17 19:38:47.416307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.344 [2024-11-17 19:38:47.416325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.344 [2024-11-17 19:38:47.416489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.344 [2024-11-17 19:38:47.416658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.344 [2024-11-17 19:38:47.416693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.344 [2024-11-17 19:38:47.416723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.344 [2024-11-17 19:38:47.419082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.344 [2024-11-17 19:38:47.428342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.344 [2024-11-17 19:38:47.428688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.345 [2024-11-17 19:38:47.428841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.345 [2024-11-17 19:38:47.428870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.345 [2024-11-17 19:38:47.428887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.345 [2024-11-17 19:38:47.429088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.345 [2024-11-17 19:38:47.429310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.345 [2024-11-17 19:38:47.429333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.345 [2024-11-17 19:38:47.429348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.345 [2024-11-17 19:38:47.431761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.345 [2024-11-17 19:38:47.440976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.345 [2024-11-17 19:38:47.441269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.345 [2024-11-17 19:38:47.441403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.345 [2024-11-17 19:38:47.441432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.346 [2024-11-17 19:38:47.441449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.346 [2024-11-17 19:38:47.441578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.346 [2024-11-17 19:38:47.441787] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.346 [2024-11-17 19:38:47.441811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.346 [2024-11-17 19:38:47.441826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.346 [2024-11-17 19:38:47.444301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.346 [2024-11-17 19:38:47.453446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.346 [2024-11-17 19:38:47.453750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.346 [2024-11-17 19:38:47.453876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.346 [2024-11-17 19:38:47.453905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.346 [2024-11-17 19:38:47.453922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.346 [2024-11-17 19:38:47.454069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.346 [2024-11-17 19:38:47.454237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.346 [2024-11-17 19:38:47.454260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.346 [2024-11-17 19:38:47.454275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.346 [2024-11-17 19:38:47.456567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.346 [2024-11-17 19:38:47.466061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.346 [2024-11-17 19:38:47.466393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.346 [2024-11-17 19:38:47.466560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.346 [2024-11-17 19:38:47.466588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.347 [2024-11-17 19:38:47.466605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.347 [2024-11-17 19:38:47.466757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.347 [2024-11-17 19:38:47.466937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.347 [2024-11-17 19:38:47.466974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.347 [2024-11-17 19:38:47.466987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.347 [2024-11-17 19:38:47.469272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.347 [2024-11-17 19:38:47.478574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.347 [2024-11-17 19:38:47.478922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.347 [2024-11-17 19:38:47.479062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.347 [2024-11-17 19:38:47.479088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.347 [2024-11-17 19:38:47.479109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.347 [2024-11-17 19:38:47.479321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.347 [2024-11-17 19:38:47.479472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.347 [2024-11-17 19:38:47.479495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.347 [2024-11-17 19:38:47.479509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.347 [2024-11-17 19:38:47.481646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.347 [2024-11-17 19:38:47.491330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.347 [2024-11-17 19:38:47.491646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.347 [2024-11-17 19:38:47.491784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.347 [2024-11-17 19:38:47.491813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.347 [2024-11-17 19:38:47.491831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.348 [2024-11-17 19:38:47.491978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.348 [2024-11-17 19:38:47.492146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.348 [2024-11-17 19:38:47.492169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.348 [2024-11-17 19:38:47.492184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.348 [2024-11-17 19:38:47.494410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.348 [2024-11-17 19:38:47.504021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.348 [2024-11-17 19:38:47.504381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.348 [2024-11-17 19:38:47.504500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.348 [2024-11-17 19:38:47.504528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.348 [2024-11-17 19:38:47.504545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.348 [2024-11-17 19:38:47.504720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.348 [2024-11-17 19:38:47.504897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.348 [2024-11-17 19:38:47.504918] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.348 [2024-11-17 19:38:47.504932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.348 [2024-11-17 19:38:47.507489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.348 [2024-11-17 19:38:47.516743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.348 [2024-11-17 19:38:47.517074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.348 [2024-11-17 19:38:47.517196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.348 [2024-11-17 19:38:47.517237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.348 [2024-11-17 19:38:47.517253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.348 [2024-11-17 19:38:47.517357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.348 [2024-11-17 19:38:47.517525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.348 [2024-11-17 19:38:47.517549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.348 [2024-11-17 19:38:47.517564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.348 [2024-11-17 19:38:47.519920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.348 [2024-11-17 19:38:47.529425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.348 [2024-11-17 19:38:47.529701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.348 [2024-11-17 19:38:47.529847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.349 [2024-11-17 19:38:47.529872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.349 [2024-11-17 19:38:47.529889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.349 [2024-11-17 19:38:47.530082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.349 [2024-11-17 19:38:47.530269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.349 [2024-11-17 19:38:47.530291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.349 [2024-11-17 19:38:47.530306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.349 [2024-11-17 19:38:47.532697] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.349 [2024-11-17 19:38:47.541936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.349 [2024-11-17 19:38:47.542262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.349 [2024-11-17 19:38:47.542386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.349 [2024-11-17 19:38:47.542415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.349 [2024-11-17 19:38:47.542433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.350 [2024-11-17 19:38:47.542633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.350 [2024-11-17 19:38:47.542831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.350 [2024-11-17 19:38:47.542856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.350 [2024-11-17 19:38:47.542871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.350 [2024-11-17 19:38:47.545136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.350 [2024-11-17 19:38:47.554480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.350 [2024-11-17 19:38:47.554827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.350 [2024-11-17 19:38:47.554925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.350 [2024-11-17 19:38:47.554953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.350 [2024-11-17 19:38:47.554971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.350 [2024-11-17 19:38:47.555116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.350 [2024-11-17 19:38:47.555292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.350 [2024-11-17 19:38:47.555315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.350 [2024-11-17 19:38:47.555330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.350 [2024-11-17 19:38:47.557599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.350 [2024-11-17 19:38:47.566986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.350 [2024-11-17 19:38:47.567259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.350 [2024-11-17 19:38:47.567390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.350 [2024-11-17 19:38:47.567432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.350 [2024-11-17 19:38:47.567448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.350 [2024-11-17 19:38:47.567563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.350 [2024-11-17 19:38:47.567754] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.350 [2024-11-17 19:38:47.567775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.350 [2024-11-17 19:38:47.567802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.350 [2024-11-17 19:38:47.570177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.350 [2024-11-17 19:38:47.579625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.350 [2024-11-17 19:38:47.580001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.350 [2024-11-17 19:38:47.580174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.350 [2024-11-17 19:38:47.580203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.350 [2024-11-17 19:38:47.580221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.350 [2024-11-17 19:38:47.580350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.351 [2024-11-17 19:38:47.580483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.351 [2024-11-17 19:38:47.580507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.351 [2024-11-17 19:38:47.580522] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.351 [2024-11-17 19:38:47.583064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.351 [2024-11-17 19:38:47.592191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.351 [2024-11-17 19:38:47.592485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.351 [2024-11-17 19:38:47.592593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.351 [2024-11-17 19:38:47.592621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.351 [2024-11-17 19:38:47.592638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.351 [2024-11-17 19:38:47.592743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.351 [2024-11-17 19:38:47.592894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.351 [2024-11-17 19:38:47.592924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.352 [2024-11-17 19:38:47.592941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.352 [2024-11-17 19:38:47.595342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.617 [2024-11-17 19:38:47.604911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.617 [2024-11-17 19:38:47.605219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.617 [2024-11-17 19:38:47.605351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.617 [2024-11-17 19:38:47.605381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.617 [2024-11-17 19:38:47.605400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.617 [2024-11-17 19:38:47.605567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.617 [2024-11-17 19:38:47.605748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.617 [2024-11-17 19:38:47.605773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.617 [2024-11-17 19:38:47.605789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.617 [2024-11-17 19:38:47.608251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.617 [2024-11-17 19:38:47.617566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.617 [2024-11-17 19:38:47.617877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.617 [2024-11-17 19:38:47.617981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.617 [2024-11-17 19:38:47.618010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.618028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.618210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.618380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.618404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.618419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.620750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.618 [2024-11-17 19:38:47.630117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.630507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.630639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.630669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.630701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.630869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.631019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.631042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.631064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.633241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.618 [2024-11-17 19:38:47.642794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.643117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.643242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.643270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.643289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.643455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.643605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.643631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.643647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.646012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.618 [2024-11-17 19:38:47.655354] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.655694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.655838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.655868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.655887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.656070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.656257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.656282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.656298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.658649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.618 [2024-11-17 19:38:47.667849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.668161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.668302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.668339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.668359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.668542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.668686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.668724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.668740] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.671042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.618 [2024-11-17 19:38:47.680430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.680718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.680873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.680903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.680922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.681033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.681165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.681189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.681205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.683491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.618 [2024-11-17 19:38:47.693075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.693373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.693481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.693509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.693528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.693724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.693861] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.693883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.693898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.696184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.618 [2024-11-17 19:38:47.705538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.705860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.705993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.706039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.706058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.706241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.706410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.706434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.706450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.708743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.618 [2024-11-17 19:38:47.717951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.718311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.718441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.718471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.718489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.618 [2024-11-17 19:38:47.718701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.618 [2024-11-17 19:38:47.718804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.618 [2024-11-17 19:38:47.718825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.618 [2024-11-17 19:38:47.718839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.618 [2024-11-17 19:38:47.721088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.618 [2024-11-17 19:38:47.730552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.618 [2024-11-17 19:38:47.730857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.730969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.618 [2024-11-17 19:38:47.730994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.618 [2024-11-17 19:38:47.731010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.731198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.731392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.731416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.731432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.733791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.619 [2024-11-17 19:38:47.743119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.743396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.743558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.743587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.743606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.743809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.744049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.744074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.744090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.746342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.619 [2024-11-17 19:38:47.755850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.756150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.756270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.756312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.756329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.756520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.756730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.756756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.756772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.759000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.619 [2024-11-17 19:38:47.768556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.768873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.769030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.769072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.769088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.769259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.769446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.769471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.769487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.771866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.619 [2024-11-17 19:38:47.781064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.781349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.781471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.781501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.781519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.781752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.781904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.781929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.781945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.784149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.619 [2024-11-17 19:38:47.793541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.793844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.793973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.794008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.794027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.794157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.794326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.794350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.794366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.796872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.619 [2024-11-17 19:38:47.806210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.806541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.806668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.806710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.806729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.619 [2024-11-17 19:38:47.806858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.619 [2024-11-17 19:38:47.806993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.619 [2024-11-17 19:38:47.807017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.619 [2024-11-17 19:38:47.807033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.619 [2024-11-17 19:38:47.809416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.619 [2024-11-17 19:38:47.818820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.619 [2024-11-17 19:38:47.819153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.819280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.619 [2024-11-17 19:38:47.819309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.619 [2024-11-17 19:38:47.819328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.620 [2024-11-17 19:38:47.819510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.620 [2024-11-17 19:38:47.819693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.620 [2024-11-17 19:38:47.819718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.620 [2024-11-17 19:38:47.819734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.620 [2024-11-17 19:38:47.822101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.620 [2024-11-17 19:38:47.831522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.620 [2024-11-17 19:38:47.831839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.831988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.832017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.620 [2024-11-17 19:38:47.832041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.620 [2024-11-17 19:38:47.832206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.620 [2024-11-17 19:38:47.832394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.620 [2024-11-17 19:38:47.832418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.620 [2024-11-17 19:38:47.832434] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.620 [2024-11-17 19:38:47.834622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.620 [2024-11-17 19:38:47.844160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.620 [2024-11-17 19:38:47.844457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.844595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.844621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.620 [2024-11-17 19:38:47.844638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.620 [2024-11-17 19:38:47.844866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.620 [2024-11-17 19:38:47.844983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.620 [2024-11-17 19:38:47.845006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.620 [2024-11-17 19:38:47.845022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.620 [2024-11-17 19:38:47.847372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.620 [2024-11-17 19:38:47.856738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.620 [2024-11-17 19:38:47.857059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.857212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.857242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.620 [2024-11-17 19:38:47.857260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.620 [2024-11-17 19:38:47.857443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.620 [2024-11-17 19:38:47.857558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.620 [2024-11-17 19:38:47.857582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.620 [2024-11-17 19:38:47.857598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.620 [2024-11-17 19:38:47.860013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.620 [2024-11-17 19:38:47.869287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.620 [2024-11-17 19:38:47.869564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.869716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.869746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.620 [2024-11-17 19:38:47.869764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.620 [2024-11-17 19:38:47.869939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.620 [2024-11-17 19:38:47.870127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.620 [2024-11-17 19:38:47.870151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.620 [2024-11-17 19:38:47.870166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.620 [2024-11-17 19:38:47.872661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.620 [2024-11-17 19:38:47.881898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.620 [2024-11-17 19:38:47.882253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.882363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.620 [2024-11-17 19:38:47.882390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.620 [2024-11-17 19:38:47.882407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.620 [2024-11-17 19:38:47.882571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.620 [2024-11-17 19:38:47.882794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.620 [2024-11-17 19:38:47.882820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.620 [2024-11-17 19:38:47.882836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.879 [2024-11-17 19:38:47.885396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.879 [2024-11-17 19:38:47.894297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.879 [2024-11-17 19:38:47.894558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.894718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.894751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.879 [2024-11-17 19:38:47.894770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.879 [2024-11-17 19:38:47.894955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.879 [2024-11-17 19:38:47.895143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.879 [2024-11-17 19:38:47.895167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.879 [2024-11-17 19:38:47.895182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.879 [2024-11-17 19:38:47.897410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.879 [2024-11-17 19:38:47.906852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.879 [2024-11-17 19:38:47.907167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.907324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.907354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.879 [2024-11-17 19:38:47.907373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.879 [2024-11-17 19:38:47.907539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.879 [2024-11-17 19:38:47.907763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.879 [2024-11-17 19:38:47.907788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.879 [2024-11-17 19:38:47.907804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.879 [2024-11-17 19:38:47.910318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.879 [2024-11-17 19:38:47.919338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.879 [2024-11-17 19:38:47.919695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.919823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.919850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.879 [2024-11-17 19:38:47.919867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.879 [2024-11-17 19:38:47.920081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.879 [2024-11-17 19:38:47.920250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.879 [2024-11-17 19:38:47.920274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.879 [2024-11-17 19:38:47.920289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.879 [2024-11-17 19:38:47.922738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.879 [2024-11-17 19:38:47.932097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.879 [2024-11-17 19:38:47.932352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.932506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.932536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.879 [2024-11-17 19:38:47.932554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.879 [2024-11-17 19:38:47.932731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.879 [2024-11-17 19:38:47.932901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.879 [2024-11-17 19:38:47.932925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.879 [2024-11-17 19:38:47.932940] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.879 [2024-11-17 19:38:47.935343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.879 [2024-11-17 19:38:47.944605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.879 [2024-11-17 19:38:47.944875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.879 [2024-11-17 19:38:47.945025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.945075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:47.945094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:47.945242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:47.945375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:47.945405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:47.945421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:47.947667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.880 [2024-11-17 19:38:47.957337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:47.957692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.957817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.957846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:47.957864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:47.958011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:47.958162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:47.958186] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:47.958202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:47.960430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.880 [2024-11-17 19:38:47.969987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:47.970327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.970458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.970486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:47.970505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:47.970718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:47.970889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:47.970913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:47.970929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:47.973205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.880 [2024-11-17 19:38:47.982592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:47.982926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.983078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.983107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:47.983125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:47.983291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:47.983459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:47.983483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:47.983505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:47.985740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.880 [2024-11-17 19:38:47.995243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:47.995538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.995694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:47.995725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:47.995743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:47.995926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:47.996096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:47.996120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:47.996136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:47.998593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.880 [2024-11-17 19:38:48.007801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:48.008111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:48.008277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:48.008304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:48.008321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:48.008526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:48.008746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:48.008772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:48.008788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:48.011187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.880 [2024-11-17 19:38:48.020596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:48.020951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:48.021080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:48.021109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:48.021127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.880 [2024-11-17 19:38:48.021292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.880 [2024-11-17 19:38:48.021478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.880 [2024-11-17 19:38:48.021503] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.880 [2024-11-17 19:38:48.021519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.880 [2024-11-17 19:38:48.023738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.880 [2024-11-17 19:38:48.033071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.880 [2024-11-17 19:38:48.033374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:48.033545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.880 [2024-11-17 19:38:48.033572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.880 [2024-11-17 19:38:48.033606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.033764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.033970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.033995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.034011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.036252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.881 [2024-11-17 19:38:48.045524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.045862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.045992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.046022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.046040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.046205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.046411] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.046435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.046451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.048934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.881 [2024-11-17 19:38:48.058091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.058507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.058750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.058813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.058831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.059032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.059202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.059226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.059242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.061593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.881 [2024-11-17 19:38:48.070487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.070773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.070907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.070939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.070958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.071141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.071329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.071353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.071369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.073724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.881 [2024-11-17 19:38:48.082922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.083173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.083324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.083354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.083372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.083537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.083737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.083762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.083778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.086035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.881 [2024-11-17 19:38:48.095357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.095642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.095749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.095780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.095799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.095963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.096169] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.096193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.096209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.098433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.881 [2024-11-17 19:38:48.108058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.108338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.108463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.108493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.108525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.108707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.108853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.108878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.108893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.111259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.881 [2024-11-17 19:38:48.120559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.120904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.121054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.121084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.121102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.121268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.881 [2024-11-17 19:38:48.121473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.881 [2024-11-17 19:38:48.121496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.881 [2024-11-17 19:38:48.121512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.881 [2024-11-17 19:38:48.123816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.881 [2024-11-17 19:38:48.133101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.881 [2024-11-17 19:38:48.133409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.133561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.881 [2024-11-17 19:38:48.133590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:49.881 [2024-11-17 19:38:48.133608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:49.881 [2024-11-17 19:38:48.133750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:49.882 [2024-11-17 19:38:48.133939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.882 [2024-11-17 19:38:48.133963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.882 [2024-11-17 19:38:48.133979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.882 [2024-11-17 19:38:48.136451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.141 [2024-11-17 19:38:48.145844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.141 [2024-11-17 19:38:48.146181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.146307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.146335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.141 [2024-11-17 19:38:48.146358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.141 [2024-11-17 19:38:48.146503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.141 [2024-11-17 19:38:48.146638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.141 [2024-11-17 19:38:48.146662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.141 [2024-11-17 19:38:48.146690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.141 [2024-11-17 19:38:48.148987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.141 [2024-11-17 19:38:48.158371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.141 [2024-11-17 19:38:48.158690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.158852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.158883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.141 [2024-11-17 19:38:48.158902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.141 [2024-11-17 19:38:48.159104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.141 [2024-11-17 19:38:48.159273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.141 [2024-11-17 19:38:48.159298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.141 [2024-11-17 19:38:48.159313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.141 [2024-11-17 19:38:48.161693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.141 [2024-11-17 19:38:48.171301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.141 [2024-11-17 19:38:48.171730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.171872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.171901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.141 [2024-11-17 19:38:48.171921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.141 [2024-11-17 19:38:48.172104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.141 [2024-11-17 19:38:48.172256] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.141 [2024-11-17 19:38:48.172280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.141 [2024-11-17 19:38:48.172296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.141 [2024-11-17 19:38:48.174628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.141 [2024-11-17 19:38:48.183938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.141 [2024-11-17 19:38:48.184244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.184407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.141 [2024-11-17 19:38:48.184437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.141 [2024-11-17 19:38:48.184456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.141 [2024-11-17 19:38:48.184592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.141 [2024-11-17 19:38:48.184773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.141 [2024-11-17 19:38:48.184798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.141 [2024-11-17 19:38:48.184814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.187125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.142 [2024-11-17 19:38:48.196549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.196888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.197027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.197054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.197071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.197229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.197344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.197367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.197383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.199814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.142 [2024-11-17 19:38:48.208985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.209293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.209392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.209421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.209439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.209622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.209819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.209844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.209860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.212427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.142 [2024-11-17 19:38:48.221476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.221766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.221920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.221950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.221968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.222115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.222273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.222297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.222313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.224572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.142 [2024-11-17 19:38:48.234059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.234350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.234467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.234496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.234514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.234744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.234932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.234956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.234973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.237212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.142 [2024-11-17 19:38:48.246855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.247145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.247247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.247276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.247294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.247477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.247629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.247652] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.247669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.250104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.142 [2024-11-17 19:38:48.259152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.259448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.259624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.259651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.259667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.259856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.260021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.260051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.260068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.262273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.142 [2024-11-17 19:38:48.271781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.272056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.272195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.272221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.272238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.272444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.142 [2024-11-17 19:38:48.272609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.142 [2024-11-17 19:38:48.272633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.142 [2024-11-17 19:38:48.272649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.142 [2024-11-17 19:38:48.274881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.142 [2024-11-17 19:38:48.284616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.142 [2024-11-17 19:38:48.284965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.285106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-17 19:38:48.285159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-17 19:38:48.285178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.142 [2024-11-17 19:38:48.285307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.285476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.285500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.285516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.287836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.143 [2024-11-17 19:38:48.297054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.297395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.297524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.297555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.297573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.297714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.297885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.297909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.297931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.300371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.143 [2024-11-17 19:38:48.309434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.309727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.309885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.309915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.309934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.310082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.310287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.310311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.310327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.312534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.143 [2024-11-17 19:38:48.321923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.322224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.322384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.322411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.322428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.322588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.322816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.322842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.322858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.325240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.143 [2024-11-17 19:38:48.334473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.334809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.334996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.335048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.335067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.335250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.335455] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.335479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.335494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.337896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.143 [2024-11-17 19:38:48.347006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.347369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.347467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.347495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.347512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.347688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.347841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.347865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.347881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.350265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.143 [2024-11-17 19:38:48.359669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.360021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.360200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.360236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.360271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.360436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.360624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.360648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.360663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.363164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.143 [2024-11-17 19:38:48.372503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.372863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.372981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.373008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.373024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.373180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.373332] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.373356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.373371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.375692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.143 [2024-11-17 19:38:48.385060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.143 [2024-11-17 19:38:48.385355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.385484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-17 19:38:48.385515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-17 19:38:48.385533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.143 [2024-11-17 19:38:48.385692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.143 [2024-11-17 19:38:48.385863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.143 [2024-11-17 19:38:48.385887] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.143 [2024-11-17 19:38:48.385903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.143 [2024-11-17 19:38:48.388124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.144 [2024-11-17 19:38:48.397524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.144 [2024-11-17 19:38:48.397842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.144 [2024-11-17 19:38:48.397968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.144 [2024-11-17 19:38:48.397997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.144 [2024-11-17 19:38:48.398015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.144 [2024-11-17 19:38:48.398162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.144 [2024-11-17 19:38:48.398349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.144 [2024-11-17 19:38:48.398373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.144 [2024-11-17 19:38:48.398389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.144 [2024-11-17 19:38:48.400651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.404 [2024-11-17 19:38:48.410379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.410730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.410840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.410873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.410892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.411094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.411264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.411288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.411304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.413636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.404 [2024-11-17 19:38:48.422917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.423252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.423378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.423409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.423427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.423594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.423776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.423801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.423817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.426076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.404 [2024-11-17 19:38:48.435777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.436092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.436221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.436252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.436271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.436436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.436642] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.436665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.436693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.439081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.404 [2024-11-17 19:38:48.448419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.448730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.448855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.448882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.448899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.449078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.449211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.449235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.449250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.451603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.404 [2024-11-17 19:38:48.461051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.461402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.461553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.461591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.461611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.461805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.461923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.461946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.461962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.464149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.404 [2024-11-17 19:38:48.473563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.473886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.474016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.474045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.474064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.474248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.474417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.474441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.474457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.476975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.404 [2024-11-17 19:38:48.486133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.486455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.486573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.486602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.486621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.486834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.487040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.487065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.487081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.489573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.404 [2024-11-17 19:38:48.498782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.404 [2024-11-17 19:38:48.499090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.499218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-17 19:38:48.499247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.404 [2024-11-17 19:38:48.499271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.404 [2024-11-17 19:38:48.499473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.404 [2024-11-17 19:38:48.499660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.404 [2024-11-17 19:38:48.499693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.404 [2024-11-17 19:38:48.499711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.404 [2024-11-17 19:38:48.501851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.405 [2024-11-17 19:38:48.511341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.511689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.511798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.511825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.511841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.512020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.512229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.512253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.512270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.514679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.405 [2024-11-17 19:38:48.524029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.524346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.524522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.524548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.524565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.524768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.524921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.524944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.524967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.527139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.405 [2024-11-17 19:38:48.536521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.536801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.536909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.536939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.536957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.537100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.537323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.537346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.537362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.539623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.405 [2024-11-17 19:38:48.548990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.549292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.549411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.549437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.549454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.549648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.549836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.549859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.549874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.552035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.405 [2024-11-17 19:38:48.561132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.561426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.561531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.561576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.561593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.561747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.561908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.561929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.561943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.564366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.405 [2024-11-17 19:38:48.573644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.573923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.574028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.574054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.574071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.574220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.574361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.574383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.574398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.576916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.405 [2024-11-17 19:38:48.586120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.586398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.586524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.586554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.586572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.586776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.586946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.586970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.586986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.589227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.405 [2024-11-17 19:38:48.598661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.405 [2024-11-17 19:38:48.598955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.599074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-17 19:38:48.599104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-17 19:38:48.599122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.405 [2024-11-17 19:38:48.599269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.405 [2024-11-17 19:38:48.599456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.405 [2024-11-17 19:38:48.599480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.405 [2024-11-17 19:38:48.599495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.405 [2024-11-17 19:38:48.601836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.406 [2024-11-17 19:38:48.611103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.406 [2024-11-17 19:38:48.611450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.611611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.611641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-17 19:38:48.611659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.406 [2024-11-17 19:38:48.611868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.406 [2024-11-17 19:38:48.612021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.406 [2024-11-17 19:38:48.612051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.406 [2024-11-17 19:38:48.612067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.406 [2024-11-17 19:38:48.614521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.406 [2024-11-17 19:38:48.623630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.406 [2024-11-17 19:38:48.623910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.624036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.624065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-17 19:38:48.624083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.406 [2024-11-17 19:38:48.624230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.406 [2024-11-17 19:38:48.624363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.406 [2024-11-17 19:38:48.624387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.406 [2024-11-17 19:38:48.624402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.406 [2024-11-17 19:38:48.626687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.406 [2024-11-17 19:38:48.636343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.406 [2024-11-17 19:38:48.636657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.636786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.636813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-17 19:38:48.636829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.406 [2024-11-17 19:38:48.636986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.406 [2024-11-17 19:38:48.637139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.406 [2024-11-17 19:38:48.637171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.406 [2024-11-17 19:38:48.637186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.406 [2024-11-17 19:38:48.639537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.406 [2024-11-17 19:38:48.648942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.406 [2024-11-17 19:38:48.649288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.649417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.649444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-17 19:38:48.649460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.406 [2024-11-17 19:38:48.649608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.406 [2024-11-17 19:38:48.649805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.406 [2024-11-17 19:38:48.649827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.406 [2024-11-17 19:38:48.649845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.406 [2024-11-17 19:38:48.652058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.406 [2024-11-17 19:38:48.661523] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.406 [2024-11-17 19:38:48.661890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.662034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-17 19:38:48.662063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-17 19:38:48.662081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.406 [2024-11-17 19:38:48.662282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.406 [2024-11-17 19:38:48.662434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.406 [2024-11-17 19:38:48.662458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.406 [2024-11-17 19:38:48.662474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.406 [2024-11-17 19:38:48.664865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.667 [2024-11-17 19:38:48.674056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.674343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.674484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.674525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.674544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.674656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.674818] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.674843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.674860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.677158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.667 [2024-11-17 19:38:48.686711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.687023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.687170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.687197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.687214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.687330] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.687468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.687492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.687508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.689629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.667 [2024-11-17 19:38:48.699344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.699668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.699772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.699802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.699821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.700004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.700119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.700144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.700160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.702298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.667 [2024-11-17 19:38:48.711991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.712292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.712380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.712407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.712424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.712589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.712751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.712774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.712788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.715143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.667 [2024-11-17 19:38:48.724533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.724848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.724997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.725023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.725040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.725217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.725387] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.725411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.725427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.727764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.667 [2024-11-17 19:38:48.737201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.737493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.737621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.737650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.737668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.737832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.738013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.738039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.738055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.740666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.667 [2024-11-17 19:38:48.749837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.750221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.750346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.750393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.667 [2024-11-17 19:38:48.750411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.667 [2024-11-17 19:38:48.750576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.667 [2024-11-17 19:38:48.750722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.667 [2024-11-17 19:38:48.750760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.667 [2024-11-17 19:38:48.750775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.667 [2024-11-17 19:38:48.752958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.667 [2024-11-17 19:38:48.762316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.667 [2024-11-17 19:38:48.762582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.762727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.667 [2024-11-17 19:38:48.762755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.762772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.762916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.763111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.763137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.763153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.765605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.668 [2024-11-17 19:38:48.774763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.775155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.775310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.775337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.775353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.775532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.775695] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.775720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.775736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.777851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.668 [2024-11-17 19:38:48.787442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.787744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.787909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.787940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.787959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.788144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.788331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.788355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.788371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.790776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.668 [2024-11-17 19:38:48.799918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.800256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.800395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.800423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.800442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.800552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.800733] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.800757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.800772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.803048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.668 [2024-11-17 19:38:48.812565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.812881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.813013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.813046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.813065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.813194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.813363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.813386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.813401] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.815792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1326680 Killed "${NVMF_APP[@]}" "$@" 00:29:50.668 19:38:48 -- host/bdevperf.sh@36 -- # tgt_init 00:29:50.668 19:38:48 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:50.668 19:38:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:50.668 19:38:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.668 19:38:48 -- common/autotest_common.sh@10 -- # set +x 00:29:50.668 19:38:48 -- nvmf/common.sh@469 -- # nvmfpid=1327799 00:29:50.668 19:38:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:50.668 19:38:48 -- nvmf/common.sh@470 -- # waitforlisten 1327799 00:29:50.668 19:38:48 -- common/autotest_common.sh@829 -- # '[' -z 1327799 ']' 00:29:50.668 19:38:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.668 19:38:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:50.668 19:38:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.668 19:38:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:50.668 19:38:48 -- common/autotest_common.sh@10 -- # set +x 00:29:50.668 [2024-11-17 19:38:48.825115] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.825403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.825497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.825528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.825546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.825705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.825860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.825882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.825896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.828333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
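At this point bdevperf.sh line 35 has killed the previous nvmf target (PID 1326680) and tgt_init/nvmfappstart launches a fresh nvmf_tgt with core mask 0xE, after which the harness waits for the new process to listen on /var/tmp/spdk.sock. As a hedged sketch of what that wait step amounts to (this is not the real waitforlisten helper; the file name and retry counts are illustrative, the socket path is the one echoed in the log):

    /* wait_for_rpc_sock.c - sketch of polling until an app accepts connections
     * on a UNIX-domain socket such as /var/tmp/spdk.sock. Illustration only. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int try_connect(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return rc;   /* 0 once something is accepting on the socket */
    }

    int main(void)
    {
        const char *path = "/var/tmp/spdk.sock";   /* RPC socket named in the log */

        for (int i = 0; i < 100; i++) {            /* roughly 10 s of retries */
            if (try_connect(path) == 0) {
                printf("%s is accepting connections\n", path);
                return 0;
            }
            usleep(100 * 1000);                    /* 100 ms between attempts */
        }
        fprintf(stderr, "timed out waiting for %s: %s\n", path, strerror(errno));
        return 1;
    }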
00:29:50.668 [2024-11-17 19:38:48.837774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.838129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.838245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.838272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.668 [2024-11-17 19:38:48.838289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.668 [2024-11-17 19:38:48.838427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.668 [2024-11-17 19:38:48.838548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.668 [2024-11-17 19:38:48.838570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.668 [2024-11-17 19:38:48.838584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.668 [2024-11-17 19:38:48.840803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.668 [2024-11-17 19:38:48.849754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.668 [2024-11-17 19:38:48.850097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.668 [2024-11-17 19:38:48.850241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.850267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.850284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.850465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 [2024-11-17 19:38:48.850643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.850664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.850704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.852854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.669 [2024-11-17 19:38:48.862077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.669 [2024-11-17 19:38:48.862340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.862474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.862501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.862518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.862706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 [2024-11-17 19:38:48.862898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.862919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.862932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.864549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:50.669 [2024-11-17 19:38:48.864603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.669 [2024-11-17 19:38:48.864847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.669 [2024-11-17 19:38:48.874381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.669 [2024-11-17 19:38:48.874725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.874846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.874874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.874896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.875029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 [2024-11-17 19:38:48.875174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.875194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.875206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.877145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.669 [2024-11-17 19:38:48.886579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.669 [2024-11-17 19:38:48.886859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.887023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.887049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.887066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.887275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 [2024-11-17 19:38:48.887399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.887419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.887431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.889466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.669 [2024-11-17 19:38:48.898995] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.669 [2024-11-17 19:38:48.899304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.899487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.899514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.899530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.899698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.669 [2024-11-17 19:38:48.899854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.899874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.899887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.901797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
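The new target's startup also prints "EAL: No free 2048 kB hugepages reported on node 1", meaning DPDK found no free 2 MB hugepages on that NUMA node when the app initialized. A quick way to inspect the same counters is to read the kernel's hugepage files; the sketch below assumes the standard Linux sysfs layout and is not part of the test suite:

    /* hugepage_check.c - read 2 MB hugepage counters from sysfs.
     * Standard Linux paths assumed; adjust the node directory as needed. */
    #include <stdio.h>

    static long read_counter(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        long v = -1;
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
        return v;
    }

    int main(void)
    {
        /* Global pool of 2048 kB pages. */
        long total = read_counter("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages");
        long free_ = read_counter("/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages");

        /* Per-NUMA-node view, matching the "node 1" in the EAL message. */
        long node1_free = read_counter(
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages");

        printf("2MB hugepages: total=%ld free=%ld node1_free=%ld\n",
               total, free_, node1_free);
        return 0;
    }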
00:29:50.669 [2024-11-17 19:38:48.911232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.669 [2024-11-17 19:38:48.911527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.911702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.911747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.911769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.911952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 [2024-11-17 19:38:48.912121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.912145] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.912161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.914504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.669 [2024-11-17 19:38:48.923878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.669 [2024-11-17 19:38:48.924199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.924318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-17 19:38:48.924344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-17 19:38:48.924361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.669 [2024-11-17 19:38:48.924493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.669 [2024-11-17 19:38:48.924656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.669 [2024-11-17 19:38:48.924692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.669 [2024-11-17 19:38:48.924725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.669 [2024-11-17 19:38:48.926780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.930 [2024-11-17 19:38:48.934466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:50.930 [2024-11-17 19:38:48.936566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.930 [2024-11-17 19:38:48.936925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.937066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.937096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.930 [2024-11-17 19:38:48.937115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.930 [2024-11-17 19:38:48.937263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.930 [2024-11-17 19:38:48.937453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.930 [2024-11-17 19:38:48.937475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.930 [2024-11-17 19:38:48.937489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.930 [2024-11-17 19:38:48.939836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.930 [2024-11-17 19:38:48.949372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.930 [2024-11-17 19:38:48.949787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.949931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.949959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.930 [2024-11-17 19:38:48.949979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.930 [2024-11-17 19:38:48.950161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.930 [2024-11-17 19:38:48.950346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.930 [2024-11-17 19:38:48.950371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.930 [2024-11-17 19:38:48.950389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.930 [2024-11-17 19:38:48.952748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
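The "Total cores available: 3" notice and the reactors starting on cores 1, 2 and 3 follow directly from the -m 0xE core mask passed above: 0xE is binary 1110, so bits 1 through 3 are set and bit 0 (core 0) is left out. A tiny sketch of decoding such a mask (illustrative file name, not SPDK code):

    /* coremask_decode.c - show which CPU cores a hex core mask selects.
     * 0xE (binary 1110) selects cores 1, 2 and 3, matching the reactor
     * messages in the log; core 0 (bit 0) is excluded. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long mask = strtoul("0xE", NULL, 16);   /* the -m argument */
        int count = 0;

        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf(" %d", core);
                count++;
            }
        }
        printf("\ntotal cores selected: %d\n", count);   /* prints 3 for 0xE */
        return 0;
    }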
00:29:50.930 [2024-11-17 19:38:48.961835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.930 [2024-11-17 19:38:48.962099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.962256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.962283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.930 [2024-11-17 19:38:48.962300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.930 [2024-11-17 19:38:48.962464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.930 [2024-11-17 19:38:48.962689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.930 [2024-11-17 19:38:48.962715] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.930 [2024-11-17 19:38:48.962746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.930 [2024-11-17 19:38:48.964906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.930 [2024-11-17 19:38:48.974285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.930 [2024-11-17 19:38:48.974635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.974785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.974813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.930 [2024-11-17 19:38:48.974831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.930 [2024-11-17 19:38:48.974997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.930 [2024-11-17 19:38:48.975152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.930 [2024-11-17 19:38:48.975176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.930 [2024-11-17 19:38:48.975193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.930 [2024-11-17 19:38:48.977565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.930 [2024-11-17 19:38:48.986872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.930 [2024-11-17 19:38:48.987308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.987448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.987478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.930 [2024-11-17 19:38:48.987499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.930 [2024-11-17 19:38:48.987653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.930 [2024-11-17 19:38:48.987815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.930 [2024-11-17 19:38:48.987837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.930 [2024-11-17 19:38:48.987853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.930 [2024-11-17 19:38:48.990178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.930 [2024-11-17 19:38:48.999384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.930 [2024-11-17 19:38:48.999729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.999841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.930 [2024-11-17 19:38:48.999869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.930 [2024-11-17 19:38:48.999888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.930 [2024-11-17 19:38:49.000049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.930 [2024-11-17 19:38:49.000204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.930 [2024-11-17 19:38:49.000229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.930 [2024-11-17 19:38:49.000246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.930 [2024-11-17 19:38:49.002452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.930 [2024-11-17 19:38:49.011762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.012040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.012136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.012164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.012182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.012329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.012529] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.012555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.012573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.931 [2024-11-17 19:38:49.015015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.931 [2024-11-17 19:38:49.024163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.024490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.024643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.024670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.024697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.024718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:50.931 [2024-11-17 19:38:49.024836] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.931 [2024-11-17 19:38:49.024847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.024863] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.931 [2024-11-17 19:38:49.024879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.931 [2024-11-17 19:38:49.024934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.931 [2024-11-17 19:38:49.025015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.025038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.025054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:50.931 [2024-11-17 19:38:49.024987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.931 [2024-11-17 19:38:49.024990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.931 [2024-11-17 19:38:49.027262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.931 [2024-11-17 19:38:49.036701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.037063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.037230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.037258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.037279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.037389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.037618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.037640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.037657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.931 [2024-11-17 19:38:49.039816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.931 [2024-11-17 19:38:49.049093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.049581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.049725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.049754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.049773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.049949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.050082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.050104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.050121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.931 [2024-11-17 19:38:49.052190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.931 [2024-11-17 19:38:49.061456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.061886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.062043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.062070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.062090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.062231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.062397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.062419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.062435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.931 [2024-11-17 19:38:49.064504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.931 [2024-11-17 19:38:49.073780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.074188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.074355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.074382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.074402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.074545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.074737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.074761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.074778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.931 [2024-11-17 19:38:49.076836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.931 [2024-11-17 19:38:49.086105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.086472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.086631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.086658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.086685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.931 [2024-11-17 19:38:49.086860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.931 [2024-11-17 19:38:49.087040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.931 [2024-11-17 19:38:49.087062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.931 [2024-11-17 19:38:49.087077] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.931 [2024-11-17 19:38:49.089292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.931 [2024-11-17 19:38:49.098303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.931 [2024-11-17 19:38:49.098685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.098804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.931 [2024-11-17 19:38:49.098833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.931 [2024-11-17 19:38:49.098863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.098990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.099157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.099179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.099195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.101217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.932 [2024-11-17 19:38:49.110420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.110684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.110797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.110824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.110840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.111019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.111195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.111215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.111228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.113282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.932 [2024-11-17 19:38:49.122757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.123011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.123158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.123186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.123204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.123338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.123486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.123507] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.123520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.125762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.932 [2024-11-17 19:38:49.135098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.135359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.135513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.135540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.135557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.135705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.135899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.135921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.135935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.137991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.932 [2024-11-17 19:38:49.147291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.147604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.147697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.147723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.147740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.147857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.148022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.148042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.148056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.150223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.932 [2024-11-17 19:38:49.159410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.159785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.159880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.159905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.159921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.160070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.160261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.160284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.160298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.162531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.932 [2024-11-17 19:38:49.171419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.171650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.171796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.171824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.171841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.172039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.172189] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.172211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.172225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.174220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.932 [2024-11-17 19:38:49.183601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.932 [2024-11-17 19:38:49.183893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.184025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-17 19:38:49.184054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-17 19:38:49.184070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:50.932 [2024-11-17 19:38:49.184205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:50.932 [2024-11-17 19:38:49.184383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.932 [2024-11-17 19:38:49.184404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.932 [2024-11-17 19:38:49.184417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.932 [2024-11-17 19:38:49.186447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.193 [2024-11-17 19:38:49.196184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.193 [2024-11-17 19:38:49.196506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.193 [2024-11-17 19:38:49.196630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.193 [2024-11-17 19:38:49.196659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.193 [2024-11-17 19:38:49.196685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.193 [2024-11-17 19:38:49.196839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.193 [2024-11-17 19:38:49.197002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.193 [2024-11-17 19:38:49.197024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.193 [2024-11-17 19:38:49.197038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.193 [2024-11-17 19:38:49.199174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.193 [2024-11-17 19:38:49.208443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.193 [2024-11-17 19:38:49.208731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.193 [2024-11-17 19:38:49.208828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.193 [2024-11-17 19:38:49.208855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.193 [2024-11-17 19:38:49.208871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.193 [2024-11-17 19:38:49.209054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.193 [2024-11-17 19:38:49.209276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.193 [2024-11-17 19:38:49.209303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.193 [2024-11-17 19:38:49.209317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.193 [2024-11-17 19:38:49.211283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.193 [2024-11-17 19:38:49.220772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.193 [2024-11-17 19:38:49.221068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.193 [2024-11-17 19:38:49.221192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.193 [2024-11-17 19:38:49.221218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.193 [2024-11-17 19:38:49.221234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.221367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.221548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.221568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.221581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.223714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.233135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.233502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.233649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.233682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.233701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.233834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.234016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.234052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.234065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.235898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.245291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.245580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.245728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.245756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.245773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.245955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.246100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.246130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.246149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.248226] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.257671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.257921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.258066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.258093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.258111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.258212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.258392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.258412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.258425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.260567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.269788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.270050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.270140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.270166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.270182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.270332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.270527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.270549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.270562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.272570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
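Each refused connect is followed by nvme_tcp_qpair_process_completions() reporting "Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor". Errno 9 is EBADF on Linux: by the time the flush runs, the socket that never connected has already been closed during qpair tear-down. A minimal sketch of that errno under the assumption of an ordinary closed socket (not SPDK's internal tear-down path):

/* Hypothetical sketch: once a socket descriptor has been closed, any further
 * operation on it fails with errno 9 (EBADF), matching the "(9): Bad file
 * descriptor" flush errors in the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                                    /* stands in for qpair tear-down closing the socket */

    if (send(fd, "x", 1, 0) < 0) {
        /* Prints: flush failed (9): Bad file descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}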
00:29:51.194 [2024-11-17 19:38:49.281994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.282298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.282442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.282470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.282486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.282636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.282842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.282866] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.282880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.284946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.294042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.294361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.294478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.294503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.294520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.294710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.294847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.294869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.294884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.297016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.306274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.306490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.306617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.306645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.306661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.306786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.306938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.306959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.306988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.309041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.318515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.318763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.318880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.318906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.318922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.319072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.319263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.319285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.319299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.321315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.330920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.331287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.331431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.331457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.331473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.331695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.331828] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.331849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.331863] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.333804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.343069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.343329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.343452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.343478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.343495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.343704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.343855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.343877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.343891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.346003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.355351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.355613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.355708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.355733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.355750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.355883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.356051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.356072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.356086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.358263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.367757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.368078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.368170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.368196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.368212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.368378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.368584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.368607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.368621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.370793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.380045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.380303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.380399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.380424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.380440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.380590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.380763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.380784] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.380798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.382735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.194 [2024-11-17 19:38:49.392359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.392683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.392805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.194 [2024-11-17 19:38:49.392832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.194 [2024-11-17 19:38:49.392849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.194 [2024-11-17 19:38:49.393013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.194 [2024-11-17 19:38:49.393207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.194 [2024-11-17 19:38:49.393229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.194 [2024-11-17 19:38:49.393243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.194 [2024-11-17 19:38:49.395310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.194 [2024-11-17 19:38:49.404629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.194 [2024-11-17 19:38:49.404876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.404995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.405027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.195 [2024-11-17 19:38:49.405045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.195 [2024-11-17 19:38:49.405210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.195 [2024-11-17 19:38:49.405402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.195 [2024-11-17 19:38:49.405423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.195 [2024-11-17 19:38:49.405437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.195 [2024-11-17 19:38:49.407357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.195 [2024-11-17 19:38:49.416936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.195 [2024-11-17 19:38:49.417230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.417342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.417367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.195 [2024-11-17 19:38:49.417384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.195 [2024-11-17 19:38:49.417549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.195 [2024-11-17 19:38:49.417732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.195 [2024-11-17 19:38:49.417753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.195 [2024-11-17 19:38:49.417767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.195 [2024-11-17 19:38:49.419704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.195 [2024-11-17 19:38:49.429285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.195 [2024-11-17 19:38:49.429572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.429718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.429747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.195 [2024-11-17 19:38:49.429763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.195 [2024-11-17 19:38:49.429897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.195 [2024-11-17 19:38:49.430060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.195 [2024-11-17 19:38:49.430081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.195 [2024-11-17 19:38:49.430094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.195 [2024-11-17 19:38:49.432103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.195 [2024-11-17 19:38:49.441545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.195 [2024-11-17 19:38:49.441795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.441901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.441928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.195 [2024-11-17 19:38:49.441951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.195 [2024-11-17 19:38:49.442117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.195 [2024-11-17 19:38:49.442308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.195 [2024-11-17 19:38:49.442330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.195 [2024-11-17 19:38:49.442344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.195 [2024-11-17 19:38:49.444356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.195 [2024-11-17 19:38:49.454087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.195 [2024-11-17 19:38:49.454376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.454491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.195 [2024-11-17 19:38:49.454518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.195 [2024-11-17 19:38:49.454536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.195 [2024-11-17 19:38:49.454710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.195 [2024-11-17 19:38:49.454864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.195 [2024-11-17 19:38:49.454885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.195 [2024-11-17 19:38:49.454900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.195 [2024-11-17 19:38:49.457091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.453 [2024-11-17 19:38:49.466221] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.466470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.466587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.466614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.466632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.466774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.453 [2024-11-17 19:38:49.466944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.453 [2024-11-17 19:38:49.466967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.453 [2024-11-17 19:38:49.466997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.453 [2024-11-17 19:38:49.469121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.453 [2024-11-17 19:38:49.478468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.478806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.478925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.478951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.478968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.479170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.453 [2024-11-17 19:38:49.479315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.453 [2024-11-17 19:38:49.479337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.453 [2024-11-17 19:38:49.479350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.453 [2024-11-17 19:38:49.481429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.453 [2024-11-17 19:38:49.490859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.491160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.491306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.491334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.491351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.491500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.453 [2024-11-17 19:38:49.491664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.453 [2024-11-17 19:38:49.491719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.453 [2024-11-17 19:38:49.491735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.453 [2024-11-17 19:38:49.493702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.453 [2024-11-17 19:38:49.503103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.503399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.503519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.503545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.503561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.503768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.453 [2024-11-17 19:38:49.503932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.453 [2024-11-17 19:38:49.503954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.453 [2024-11-17 19:38:49.503983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.453 [2024-11-17 19:38:49.505997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.453 [2024-11-17 19:38:49.515303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.515542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.515637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.515662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.515686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.515838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.453 [2024-11-17 19:38:49.516021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.453 [2024-11-17 19:38:49.516042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.453 [2024-11-17 19:38:49.516055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.453 [2024-11-17 19:38:49.518179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.453 [2024-11-17 19:38:49.527609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.527938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.528030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.528056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.528072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.528237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.453 [2024-11-17 19:38:49.528369] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.453 [2024-11-17 19:38:49.528389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.453 [2024-11-17 19:38:49.528402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.453 [2024-11-17 19:38:49.530316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.453 [2024-11-17 19:38:49.539901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.453 [2024-11-17 19:38:49.540255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.540373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.453 [2024-11-17 19:38:49.540398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.453 [2024-11-17 19:38:49.540414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.453 [2024-11-17 19:38:49.540563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.540722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.540743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.540757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.542861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.454 [2024-11-17 19:38:49.552049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.552411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.552533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.552559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.552575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.552752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.552917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.552944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.552959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.554936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.454 [2024-11-17 19:38:49.564292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.564538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.564684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.564712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.564728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.564894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.565087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.565109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.565122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.567147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.454 [2024-11-17 19:38:49.576538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.576832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.576955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.576983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.576999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.577132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.577309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.577331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.577345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.579382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.454 [2024-11-17 19:38:49.588914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.589190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.589277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.589303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.589320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.589469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.589660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.589706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.589726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.591909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.454 [2024-11-17 19:38:49.601182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.601510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.601661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.601695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.601713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.601845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.601981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.602003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.602032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.604013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.454 [2024-11-17 19:38:49.613541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.613846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.613966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.613993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.614009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.614125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.614306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.614327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.614341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.616336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.454 [2024-11-17 19:38:49.625936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.626260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.626400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.626427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.626444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.626576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.626739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.626762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.626776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.628908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.454 [2024-11-17 19:38:49.638272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.638630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.638724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.638750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.454 [2024-11-17 19:38:49.638766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.454 [2024-11-17 19:38:49.638914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.454 [2024-11-17 19:38:49.639092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.454 [2024-11-17 19:38:49.639113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.454 [2024-11-17 19:38:49.639126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.454 [2024-11-17 19:38:49.641016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.454 [2024-11-17 19:38:49.650576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.454 [2024-11-17 19:38:49.650845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.650964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.454 [2024-11-17 19:38:49.650990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.455 [2024-11-17 19:38:49.651007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.455 [2024-11-17 19:38:49.651156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.455 [2024-11-17 19:38:49.651321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.455 [2024-11-17 19:38:49.651341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.455 [2024-11-17 19:38:49.651354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.455 [2024-11-17 19:38:49.653586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.455 [2024-11-17 19:38:49.662882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.455 [2024-11-17 19:38:49.663184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.663303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.663331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.455 [2024-11-17 19:38:49.663348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.455 [2024-11-17 19:38:49.663497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.455 [2024-11-17 19:38:49.663647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.455 [2024-11-17 19:38:49.663701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.455 [2024-11-17 19:38:49.663719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.455 [2024-11-17 19:38:49.665946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.455 [2024-11-17 19:38:49.675025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.455 [2024-11-17 19:38:49.675347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.675429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.675456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.455 [2024-11-17 19:38:49.675472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.455 [2024-11-17 19:38:49.675621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.455 [2024-11-17 19:38:49.675798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.455 [2024-11-17 19:38:49.675821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.455 [2024-11-17 19:38:49.675835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.455 [2024-11-17 19:38:49.677861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.455 [2024-11-17 19:38:49.687412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.455 [2024-11-17 19:38:49.687706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.687792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.687819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.455 [2024-11-17 19:38:49.687835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.455 [2024-11-17 19:38:49.688017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.455 [2024-11-17 19:38:49.688179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.455 [2024-11-17 19:38:49.688200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.455 [2024-11-17 19:38:49.688213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.455 [2024-11-17 19:38:49.690129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.455 [2024-11-17 19:38:49.699527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.455 [2024-11-17 19:38:49.699843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.699957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.699992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.455 [2024-11-17 19:38:49.700009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.455 [2024-11-17 19:38:49.700157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.455 [2024-11-17 19:38:49.700365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.455 [2024-11-17 19:38:49.700386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.455 [2024-11-17 19:38:49.700399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.455 [2024-11-17 19:38:49.702330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.455 [2024-11-17 19:38:49.711722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.455 [2024-11-17 19:38:49.711998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.712121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.455 [2024-11-17 19:38:49.712147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.455 [2024-11-17 19:38:49.712163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.455 [2024-11-17 19:38:49.712295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.455 [2024-11-17 19:38:49.712521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.455 [2024-11-17 19:38:49.712542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.455 [2024-11-17 19:38:49.712555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.455 [2024-11-17 19:38:49.714636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.715 [2024-11-17 19:38:49.723990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.724277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.724372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.724412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.724443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.724592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.724773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.724797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.724811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.726951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.715 [2024-11-17 19:38:49.736235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.736584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.736709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.736737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.736754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.736871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.737068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.737088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.737101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.739083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.715 [2024-11-17 19:38:49.748419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.748760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.748846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.748872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.748896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.749062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.749224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.749245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.749259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.751208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.715 [2024-11-17 19:38:49.760803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.761053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.761167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.761194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.761210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.761359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.761538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.761559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.761573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.763622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.715 [2024-11-17 19:38:49.773176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.773470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.773593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.773620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.773636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.773824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.773997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.774019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.774034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.776253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.715 [2024-11-17 19:38:49.785330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.785601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.785746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.785773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.785789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.785943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.786108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.786129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.786142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.788181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.715 [2024-11-17 19:38:49.797557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.797840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.797927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.797952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.797974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.798122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.798303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.798324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.798338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.800423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.715 [2024-11-17 19:38:49.809981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.810252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.810347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.810373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.715 [2024-11-17 19:38:49.810390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.715 [2024-11-17 19:38:49.810554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.715 [2024-11-17 19:38:49.810759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.715 [2024-11-17 19:38:49.810781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.715 [2024-11-17 19:38:49.810795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.715 [2024-11-17 19:38:49.812843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.715 [2024-11-17 19:38:49.822362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.715 [2024-11-17 19:38:49.822601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.822702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.715 [2024-11-17 19:38:49.822729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.822745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.822845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.823053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.823077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.823090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.825252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.716 [2024-11-17 19:38:49.834749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.835022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.835166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.835191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.835207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.835354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.835549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.835569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.835582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.837805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.716 [2024-11-17 19:38:49.847004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.847292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.847381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.847405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.847421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.847553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.847671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.847702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.847715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.849789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.716 [2024-11-17 19:38:49.859349] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.859618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.859725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.859753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.859769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.859852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.859987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.860013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.860027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.861999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.716 [2024-11-17 19:38:49.871621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.871914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.872036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.872061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.872077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.872192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.872374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.872410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.872425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.874627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.716 [2024-11-17 19:38:49.883815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.884062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.884157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.884183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.884199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.884315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.884435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.884456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.884469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.886535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.716 [2024-11-17 19:38:49.896250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.896486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.896600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.896626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.896641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.896782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.896951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.896972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.896991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 19:38:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.716 19:38:49 -- common/autotest_common.sh@862 -- # return 0 00:29:51.716 19:38:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:51.716 19:38:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.716 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:29:51.716 [2024-11-17 19:38:49.899246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.716 [2024-11-17 19:38:49.908622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.908924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.909079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.909104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.909120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.909236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.909439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.909460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.909472] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 [2024-11-17 19:38:49.911491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.716 19:38:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.716 19:38:49 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.716 19:38:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.716 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:29:51.716 [2024-11-17 19:38:49.917617] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.716 [2024-11-17 19:38:49.921135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.716 [2024-11-17 19:38:49.921421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.921544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.716 [2024-11-17 19:38:49.921569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.716 [2024-11-17 19:38:49.921585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.716 [2024-11-17 19:38:49.921742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.716 [2024-11-17 19:38:49.921878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.716 [2024-11-17 19:38:49.921900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.716 [2024-11-17 19:38:49.921913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.716 19:38:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.717 19:38:49 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.717 19:38:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.717 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:29:51.717 [2024-11-17 19:38:49.924008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.717 [2024-11-17 19:38:49.933387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.717 [2024-11-17 19:38:49.933777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.933868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.933893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.717 [2024-11-17 19:38:49.933909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.717 [2024-11-17 19:38:49.934074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.717 [2024-11-17 19:38:49.934223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.717 [2024-11-17 19:38:49.934243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.717 [2024-11-17 19:38:49.934255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
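The shell trace interleaved with the retries above is the target side catching up: host/bdevperf.sh runs rpc_cmd nvmf_create_transport -t tcp -o -u 8192, which produces the "*** TCP Transport Init ***" notice, and then rpc_cmd bdev_malloc_create 64 512 -b Malloc0 to create the RAM-backed bdev that will be exported. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; a minimal stand-alone equivalent, assuming an nvmf_tgt application is already running on the default RPC socket and that the SPDK checkout sits in ./spdk (both assumptions, not shown in this log), is sketched below with the flags copied from the trace:

  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, same options the test passes
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev with 512-byte blocks, named Malloc0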
00:29:51.717 [2024-11-17 19:38:49.936319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.717 [2024-11-17 19:38:49.945714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.717 [2024-11-17 19:38:49.946009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.946120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.946146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.717 [2024-11-17 19:38:49.946162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.717 [2024-11-17 19:38:49.946344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.717 [2024-11-17 19:38:49.946509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.717 [2024-11-17 19:38:49.946530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.717 [2024-11-17 19:38:49.946543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.717 [2024-11-17 19:38:49.948609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.717 [2024-11-17 19:38:49.957855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.717 [2024-11-17 19:38:49.958320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.958457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.958484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.717 [2024-11-17 19:38:49.958502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.717 [2024-11-17 19:38:49.958659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.717 [2024-11-17 19:38:49.958824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.717 [2024-11-17 19:38:49.958845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.717 [2024-11-17 19:38:49.958861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.717 [2024-11-17 19:38:49.961060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.717 Malloc0 00:29:51.717 19:38:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.717 19:38:49 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.717 19:38:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.717 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:29:51.717 19:38:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.717 19:38:49 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.717 19:38:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.717 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:29:51.717 [2024-11-17 19:38:49.970341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.717 [2024-11-17 19:38:49.970568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.970679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.717 [2024-11-17 19:38:49.970707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbf1d0 with addr=10.0.0.2, port=4420 00:29:51.717 [2024-11-17 19:38:49.970723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf1d0 is same with the state(5) to be set 00:29:51.717 [2024-11-17 19:38:49.970839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbf1d0 (9): Bad file descriptor 00:29:51.717 [2024-11-17 19:38:49.970990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.717 [2024-11-17 19:38:49.971012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.717 [2024-11-17 19:38:49.971025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.717 [2024-11-17 19:38:49.973223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.717 19:38:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.717 19:38:49 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.717 19:38:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.717 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:29:51.975 [2024-11-17 19:38:49.981742] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.975 [2024-11-17 19:38:49.982855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.975 19:38:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.975 19:38:49 -- host/bdevperf.sh@38 -- # wait 1326991 00:29:51.975 [2024-11-17 19:38:50.016866] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:00.158 00:30:00.158 Latency(us) 00:30:00.158 [2024-11-17T18:38:58.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.158 [2024-11-17T18:38:58.425Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:00.158 Verification LBA range: start 0x0 length 0x4000 00:30:00.158 Nvme1n1 : 15.01 9311.19 36.37 16150.45 0.00 5012.36 855.61 20680.25 00:30:00.158 [2024-11-17T18:38:58.425Z] =================================================================================================================== 00:30:00.158 [2024-11-17T18:38:58.425Z] Total : 9311.19 36.37 16150.45 0.00 5012.36 855.61 20680.25 00:30:00.416 19:38:58 -- host/bdevperf.sh@39 -- # sync 00:30:00.416 19:38:58 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.416 19:38:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.416 19:38:58 -- common/autotest_common.sh@10 -- # set +x 00:30:00.416 19:38:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.416 19:38:58 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:00.416 19:38:58 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:00.416 19:38:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:00.416 19:38:58 -- nvmf/common.sh@116 -- # sync 00:30:00.416 19:38:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:00.416 19:38:58 -- nvmf/common.sh@119 -- # set +e 00:30:00.416 19:38:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:00.416 19:38:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:00.416 rmmod nvme_tcp 00:30:00.416 rmmod nvme_fabrics 00:30:00.416 rmmod nvme_keyring 00:30:00.416 19:38:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:00.416 19:38:58 -- nvmf/common.sh@123 -- # set -e 00:30:00.416 19:38:58 -- nvmf/common.sh@124 -- # return 0 00:30:00.416 19:38:58 -- nvmf/common.sh@477 -- # '[' -n 1327799 ']' 00:30:00.674 19:38:58 -- nvmf/common.sh@478 -- # killprocess 1327799 00:30:00.674 19:38:58 -- common/autotest_common.sh@936 -- # '[' -z 1327799 ']' 00:30:00.674 19:38:58 -- common/autotest_common.sh@940 -- # kill -0 1327799 00:30:00.674 19:38:58 -- common/autotest_common.sh@941 -- # uname 00:30:00.674 19:38:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:00.674 19:38:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1327799 00:30:00.674 19:38:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:00.674 19:38:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:00.674 19:38:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1327799' 00:30:00.675 killing process with pid 1327799 00:30:00.675 19:38:58 -- common/autotest_common.sh@955 -- # kill 1327799 00:30:00.675 19:38:58 -- common/autotest_common.sh@960 -- # wait 1327799 00:30:00.933 19:38:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:00.933 19:38:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:00.933 19:38:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:00.933 19:38:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.933 19:38:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:00.933 19:38:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.933 19:38:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.933 19:38:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.833 19:39:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:02.833 00:30:02.833 real 
0m23.334s 00:30:02.833 user 1m3.204s 00:30:02.833 sys 0m4.260s 00:30:02.833 19:39:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:02.833 19:39:01 -- common/autotest_common.sh@10 -- # set +x 00:30:02.833 ************************************ 00:30:02.833 END TEST nvmf_bdevperf 00:30:02.833 ************************************ 00:30:02.833 19:39:01 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:02.833 19:39:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:02.833 19:39:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:02.833 19:39:01 -- common/autotest_common.sh@10 -- # set +x 00:30:02.833 ************************************ 00:30:02.833 START TEST nvmf_target_disconnect 00:30:02.833 ************************************ 00:30:02.833 19:39:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:03.092 * Looking for test storage... 00:30:03.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.092 19:39:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:03.092 19:39:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:03.092 19:39:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:03.092 19:39:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:03.092 19:39:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:03.092 19:39:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:03.092 19:39:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:03.092 19:39:01 -- scripts/common.sh@335 -- # IFS=.-: 00:30:03.092 19:39:01 -- scripts/common.sh@335 -- # read -ra ver1 00:30:03.092 19:39:01 -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.092 19:39:01 -- scripts/common.sh@336 -- # read -ra ver2 00:30:03.092 19:39:01 -- scripts/common.sh@337 -- # local 'op=<' 00:30:03.092 19:39:01 -- scripts/common.sh@339 -- # ver1_l=2 00:30:03.092 19:39:01 -- scripts/common.sh@340 -- # ver2_l=1 00:30:03.092 19:39:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:03.092 19:39:01 -- scripts/common.sh@343 -- # case "$op" in 00:30:03.092 19:39:01 -- scripts/common.sh@344 -- # : 1 00:30:03.092 19:39:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:03.092 19:39:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.092 19:39:01 -- scripts/common.sh@364 -- # decimal 1 00:30:03.092 19:39:01 -- scripts/common.sh@352 -- # local d=1 00:30:03.092 19:39:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.092 19:39:01 -- scripts/common.sh@354 -- # echo 1 00:30:03.092 19:39:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:03.092 19:39:01 -- scripts/common.sh@365 -- # decimal 2 00:30:03.092 19:39:01 -- scripts/common.sh@352 -- # local d=2 00:30:03.092 19:39:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.092 19:39:01 -- scripts/common.sh@354 -- # echo 2 00:30:03.092 19:39:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:03.092 19:39:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:03.092 19:39:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:03.092 19:39:01 -- scripts/common.sh@367 -- # return 0 00:30:03.092 19:39:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.092 19:39:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.092 --rc genhtml_branch_coverage=1 00:30:03.092 --rc genhtml_function_coverage=1 00:30:03.092 --rc genhtml_legend=1 00:30:03.092 --rc geninfo_all_blocks=1 00:30:03.092 --rc geninfo_unexecuted_blocks=1 00:30:03.092 00:30:03.092 ' 00:30:03.092 19:39:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.092 --rc genhtml_branch_coverage=1 00:30:03.092 --rc genhtml_function_coverage=1 00:30:03.092 --rc genhtml_legend=1 00:30:03.092 --rc geninfo_all_blocks=1 00:30:03.092 --rc geninfo_unexecuted_blocks=1 00:30:03.092 00:30:03.092 ' 00:30:03.092 19:39:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.092 --rc genhtml_branch_coverage=1 00:30:03.092 --rc genhtml_function_coverage=1 00:30:03.092 --rc genhtml_legend=1 00:30:03.092 --rc geninfo_all_blocks=1 00:30:03.092 --rc geninfo_unexecuted_blocks=1 00:30:03.092 00:30:03.092 ' 00:30:03.092 19:39:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:03.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.092 --rc genhtml_branch_coverage=1 00:30:03.092 --rc genhtml_function_coverage=1 00:30:03.092 --rc genhtml_legend=1 00:30:03.092 --rc geninfo_all_blocks=1 00:30:03.092 --rc geninfo_unexecuted_blocks=1 00:30:03.092 00:30:03.092 ' 00:30:03.092 19:39:01 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.092 19:39:01 -- nvmf/common.sh@7 -- # uname -s 00:30:03.092 19:39:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.092 19:39:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.092 19:39:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.092 19:39:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.092 19:39:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.092 19:39:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.092 19:39:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.092 19:39:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.092 19:39:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.092 19:39:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.092 19:39:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.092 19:39:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.092 19:39:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.092 19:39:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.092 19:39:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.093 19:39:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.093 19:39:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.093 19:39:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.093 19:39:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.093 19:39:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.093 19:39:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.093 19:39:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.093 19:39:01 -- paths/export.sh@5 -- # export PATH 00:30:03.093 19:39:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.093 19:39:01 -- nvmf/common.sh@46 -- # : 0 00:30:03.093 19:39:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:03.093 19:39:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:03.093 19:39:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:03.093 19:39:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.093 19:39:01 -- 
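The host identity exported above comes from nvme-cli's gen-hostnqn, and the NVME_HOSTID value is the uuid portion of that NQN. A minimal sketch of the same derivation as the values suggest (requires nvme-cli; this is not the verbatim nvmf/common.sh code):

    # derive the host NQN and host ID the way the exported values above imply
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # keep only the trailing uuid
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"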
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.093 19:39:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:03.093 19:39:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:03.093 19:39:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:03.093 19:39:01 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:03.093 19:39:01 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:03.093 19:39:01 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:03.093 19:39:01 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:30:03.093 19:39:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:03.093 19:39:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.093 19:39:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:03.093 19:39:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:03.093 19:39:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:03.093 19:39:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.093 19:39:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.093 19:39:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.093 19:39:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:03.093 19:39:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:03.093 19:39:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:03.093 19:39:01 -- common/autotest_common.sh@10 -- # set +x 00:30:04.994 19:39:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:04.994 19:39:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:04.994 19:39:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:04.994 19:39:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:04.994 19:39:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:04.994 19:39:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:04.994 19:39:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:04.994 19:39:03 -- nvmf/common.sh@294 -- # net_devs=() 00:30:04.994 19:39:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:04.994 19:39:03 -- nvmf/common.sh@295 -- # e810=() 00:30:04.994 19:39:03 -- nvmf/common.sh@295 -- # local -ga e810 00:30:04.994 19:39:03 -- nvmf/common.sh@296 -- # x722=() 00:30:04.994 19:39:03 -- nvmf/common.sh@296 -- # local -ga x722 00:30:04.994 19:39:03 -- nvmf/common.sh@297 -- # mlx=() 00:30:04.994 19:39:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:04.994 19:39:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.994 19:39:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.994 19:39:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.994 19:39:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.995 19:39:03 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:30:04.995 19:39:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:04.995 19:39:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:04.995 19:39:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.995 19:39:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:04.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:04.995 19:39:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.995 19:39:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:04.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:04.995 19:39:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:04.995 19:39:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.995 19:39:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.995 19:39:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.995 19:39:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.995 19:39:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:04.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:04.995 19:39:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.995 19:39:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.995 19:39:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.995 19:39:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.995 19:39:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.995 19:39:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:04.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:04.995 19:39:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.995 19:39:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:04.995 19:39:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:04.995 19:39:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:04.995 19:39:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.995 19:39:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.995 19:39:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.995 19:39:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:04.995 19:39:03 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.995 19:39:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.995 19:39:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:04.995 19:39:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.995 19:39:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.995 19:39:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:04.995 19:39:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:04.995 19:39:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.995 19:39:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.995 19:39:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.995 19:39:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.995 19:39:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:04.995 19:39:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.995 19:39:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.995 19:39:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.995 19:39:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:04.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:30:04.995 00:30:04.995 --- 10.0.0.2 ping statistics --- 00:30:04.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.995 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:30:04.995 19:39:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:04.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:30:04.995 00:30:04.995 --- 10.0.0.1 ping statistics --- 00:30:04.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.995 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:30:04.995 19:39:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.995 19:39:03 -- nvmf/common.sh@410 -- # return 0 00:30:04.995 19:39:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:04.995 19:39:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.995 19:39:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:04.995 19:39:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.995 19:39:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:04.995 19:39:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:05.253 19:39:03 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:05.253 19:39:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:05.253 19:39:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:05.253 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:30:05.253 ************************************ 00:30:05.253 START TEST nvmf_target_disconnect_tc1 00:30:05.253 ************************************ 00:30:05.253 19:39:03 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:30:05.253 19:39:03 -- host/target_disconnect.sh@32 -- # set +e 00:30:05.253 19:39:03 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.253 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.253 [2024-11-17 19:39:03.349290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-17 19:39:03.349548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.253 [2024-11-17 19:39:03.349576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e2270 with addr=10.0.0.2, port=4420 00:30:05.253 [2024-11-17 19:39:03.349622] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:05.253 [2024-11-17 19:39:03.349648] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:05.253 [2024-11-17 19:39:03.349663] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:05.253 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:05.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:05.253 Initializing NVMe Controllers 00:30:05.253 19:39:03 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:05.253 19:39:03 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:05.253 19:39:03 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:30:05.253 19:39:03 -- common/autotest_common.sh@1142 -- # return 0 00:30:05.253 19:39:03 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:05.253 19:39:03 -- host/target_disconnect.sh@41 -- # set -e 00:30:05.253 00:30:05.253 real 0m0.086s 00:30:05.253 user 0m0.036s 00:30:05.253 sys 0m0.050s 00:30:05.253 19:39:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:05.253 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:30:05.253 ************************************ 00:30:05.253 
END TEST nvmf_target_disconnect_tc1 00:30:05.253 ************************************ 00:30:05.253 19:39:03 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:05.253 19:39:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:05.253 19:39:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:05.253 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:30:05.253 ************************************ 00:30:05.253 START TEST nvmf_target_disconnect_tc2 00:30:05.253 ************************************ 00:30:05.253 19:39:03 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:30:05.253 19:39:03 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:05.253 19:39:03 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:05.253 19:39:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:05.253 19:39:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:05.253 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:30:05.253 19:39:03 -- nvmf/common.sh@469 -- # nvmfpid=1331052 00:30:05.253 19:39:03 -- nvmf/common.sh@470 -- # waitforlisten 1331052 00:30:05.253 19:39:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:05.253 19:39:03 -- common/autotest_common.sh@829 -- # '[' -z 1331052 ']' 00:30:05.253 19:39:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.253 19:39:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.253 19:39:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.253 19:39:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.253 19:39:03 -- common/autotest_common.sh@10 -- # set +x 00:30:05.253 [2024-11-17 19:39:03.430967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:05.253 [2024-11-17 19:39:03.431083] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.253 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.253 [2024-11-17 19:39:03.497815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.510 [2024-11-17 19:39:03.585776] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.510 [2024-11-17 19:39:03.585936] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.511 [2024-11-17 19:39:03.585962] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.511 [2024-11-17 19:39:03.585975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
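nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, and the reactor messages just below confirm that mask maps to cores 4-7. A standalone sketch (not part of the harness) for expanding such a hex mask into core numbers:

    # expand a reactor core mask; 0xF0 -> cores 4..7, matching the reactor log lines
    mask=0xF0
    for ((cpu = 0; cpu < 64; cpu++)); do
        (( (mask >> cpu) & 1 )) && echo "core $cpu"
    done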
00:30:05.511 [2024-11-17 19:39:03.586174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.511 [2024-11-17 19:39:03.586316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.511 [2024-11-17 19:39:03.586416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.511 [2024-11-17 19:39:03.586462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.444 19:39:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.444 19:39:04 -- common/autotest_common.sh@862 -- # return 0 00:30:06.444 19:39:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:06.444 19:39:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 19:39:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.444 19:39:04 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.444 19:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 Malloc0 00:30:06.444 19:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.444 19:39:04 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:06.444 19:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 [2024-11-17 19:39:04.500818] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.444 19:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.444 19:39:04 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.444 19:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 19:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.444 19:39:04 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.444 19:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 19:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.444 19:39:04 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.444 19:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 [2024-11-17 19:39:04.529115] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.444 19:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.444 19:39:04 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.444 19:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.444 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:30:06.444 19:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.444 19:39:04 -- host/target_disconnect.sh@50 -- # reconnectpid=1331212 00:30:06.444 19:39:04 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:06.444 19:39:04 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.444 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.351 19:39:06 -- host/target_disconnect.sh@53 -- # kill -9 1331052 00:30:08.351 19:39:06 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Write completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 Read completed with error (sct=0, sc=8) 00:30:08.351 starting I/O failed 00:30:08.351 [2024-11-17 19:39:06.554931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.351 Read 
completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 [2024-11-17 19:39:06.555227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error 
(sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Read completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 Write completed with error (sct=0, sc=8) 00:30:08.352 starting I/O failed 00:30:08.352 [2024-11-17 19:39:06.555530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.352 [2024-11-17 19:39:06.555748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.555873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.555901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.556050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.556330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 
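The burst of "completed with error (sct=0, sc=8)" entries and the CQ transport errors above are the intended effect of the disconnect step: target_disconnect.sh SIGKILLs the nvmf_tgt process (the "kill -9 1331052" above) while the reconnect example is driving I/O, so in-flight commands complete in error, each queue pair reports a transport error, and the host then enters its reconnect loop. A condensed sketch of that sequence; the reconnect arguments are the ones from this log, while the target pid variable name is illustrative:

    # hedged sketch of the tc2 disconnect sequence driving the errors above
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnect_pid=$!
    sleep 2
    kill -9 "$nvmf_tgt_pid"   # hard-stop the target mid-I/O (pid variable illustrative)
    sleep 2                   # host notices the dead connection and starts reconnecting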
00:30:08.352 [2024-11-17 19:39:06.556566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.556794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.556921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.557051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.557340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.557601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.557853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.557983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.558123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 
00:30:08.352 [2024-11-17 19:39:06.558379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.558617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.558864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.558979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.352 [2024-11-17 19:39:06.559005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.352 qpair failed and we were unable to recover it. 00:30:08.352 [2024-11-17 19:39:06.559152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.559238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.559265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.559464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.559574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.559603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.559701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.559787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.559817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.559908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 
00:30:08.353 [2024-11-17 19:39:06.560186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.560523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.560771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.560883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.560984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.561251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.561525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.561823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.561933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 
00:30:08.353 [2024-11-17 19:39:06.562075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.562213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.562239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.562412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.562532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.562560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.562668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.562774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.562799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.562885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.563118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.563368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.563634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 
00:30:08.353 [2024-11-17 19:39:06.563902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.563993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.564102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.564289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.564498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.564744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.564873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.564976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.565197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 
00:30:08.353 [2024-11-17 19:39:06.565426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.565685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.565901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.565981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.566010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.353 [2024-11-17 19:39:06.566154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.566263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.353 [2024-11-17 19:39:06.566289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.353 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.566411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.566524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.566550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.566631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.566715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.566741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.566879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.566996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 
00:30:08.354 [2024-11-17 19:39:06.567111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.567391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.567601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.567824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.567937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.568044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.568308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.568526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 
00:30:08.354 [2024-11-17 19:39:06.568757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.568879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.568967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.569216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.569436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.569672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.569838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.569934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.570183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 
00:30:08.354 [2024-11-17 19:39:06.570396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.570685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.570840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.570936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.571178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.571410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.571656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.571883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.571993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 
00:30:08.354 [2024-11-17 19:39:06.572108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.572303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.572530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.572770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.572907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.354 qpair failed and we were unable to recover it. 00:30:08.354 [2024-11-17 19:39:06.573020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.354 [2024-11-17 19:39:06.573129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.573267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.573566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 
00:30:08.355 [2024-11-17 19:39:06.573829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.573938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.574055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.574245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.574484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.574756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.574870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.574961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.575187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 
00:30:08.355 [2024-11-17 19:39:06.575550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.575853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.575969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.576081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.576305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.576591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.576856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.576995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.577069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 
00:30:08.355 [2024-11-17 19:39:06.577332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.577548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.577857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.577999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.578105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.578262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.578302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.578442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.578549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.578575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.578662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.578773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.578803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.578910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 
00:30:08.355 [2024-11-17 19:39:06.579174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.579417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.579661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.579884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.579989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.580076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.580165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.355 [2024-11-17 19:39:06.580191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.355 qpair failed and we were unable to recover it. 00:30:08.355 [2024-11-17 19:39:06.580319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.580409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.580437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.580560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.580658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.580691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 
00:30:08.356 [2024-11-17 19:39:06.580773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.580885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.580909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.580998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.581252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.581535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.581836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.581941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.582073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.582308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 
00:30:08.356 [2024-11-17 19:39:06.582632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.582828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.582953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.583054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.583319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.583534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.583841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.583999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.584143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 
00:30:08.356 [2024-11-17 19:39:06.584343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.584624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.584889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.584977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.585123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.585361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.585645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.585783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.585895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.586013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.586038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 
00:30:08.356 [2024-11-17 19:39:06.586122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.586200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.356 [2024-11-17 19:39:06.586226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.356 qpair failed and we were unable to recover it. 00:30:08.356 [2024-11-17 19:39:06.586342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.586420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.586448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.586595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.586707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.586733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.586852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.586940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.586967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.587078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.587334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.587554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 
00:30:08.357 [2024-11-17 19:39:06.587820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.587956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.588061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.588312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.588539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.588824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.588946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.589028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.589277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 
00:30:08.357 [2024-11-17 19:39:06.589529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.589779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.589915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.589994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.590269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.590555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.590822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.590929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.591019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 
00:30:08.357 [2024-11-17 19:39:06.591240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.591491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.591765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.591909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.592022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.592270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.592529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.592767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.592932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 
00:30:08.357 [2024-11-17 19:39:06.593038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.593128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.593156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.357 [2024-11-17 19:39:06.593264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.593368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.357 [2024-11-17 19:39:06.593394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.357 qpair failed and we were unable to recover it. 00:30:08.358 [2024-11-17 19:39:06.593482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.593589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.593615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.358 qpair failed and we were unable to recover it. 00:30:08.358 [2024-11-17 19:39:06.593756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.593853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.593880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.358 qpair failed and we were unable to recover it. 00:30:08.358 [2024-11-17 19:39:06.593970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.594080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.594106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.358 qpair failed and we were unable to recover it. 00:30:08.358 [2024-11-17 19:39:06.594220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.594335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.594361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.358 qpair failed and we were unable to recover it. 00:30:08.358 [2024-11-17 19:39:06.594440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.594548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.358 [2024-11-17 19:39:06.594574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.358 qpair failed and we were unable to recover it. 
00:30:08.641 [2024-11-17 19:39:06.625000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.625261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.625539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.625757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.625902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.626024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.626297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.626510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 
00:30:08.641 [2024-11-17 19:39:06.626820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.626928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.627069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.627154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.627181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.627315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.627467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.627493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.627578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.627722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.627749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.627895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.628131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.628330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 
00:30:08.641 [2024-11-17 19:39:06.628564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.628801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.628941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.629028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.629135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.629161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.629247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.629361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.629387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.641 [2024-11-17 19:39:06.629504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.629582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.641 [2024-11-17 19:39:06.629607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.641 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.629718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.629808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.629834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.629918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 
00:30:08.642 [2024-11-17 19:39:06.630152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.630405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.630598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.630840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.630951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.631070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.631317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.631599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 
00:30:08.642 [2024-11-17 19:39:06.631846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.631986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.632139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.632360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.632612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.632846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.632991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.633141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.633423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 
00:30:08.642 [2024-11-17 19:39:06.633698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.633813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.633894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.634141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.634371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.634608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.634844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.634955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.635063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.635179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.635210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 
00:30:08.642 [2024-11-17 19:39:06.635353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.635495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.635521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.635669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.635799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.635844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.635967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.636225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.636459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.642 qpair failed and we were unable to recover it. 00:30:08.642 [2024-11-17 19:39:06.636686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.642 [2024-11-17 19:39:06.636772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.636798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.636906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 
00:30:08.643 [2024-11-17 19:39:06.637150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.637451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.637752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.637949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.638111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.638368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.638586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.638823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.638924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 
00:30:08.643 [2024-11-17 19:39:06.639062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.639243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.639455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.639728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.639834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.639972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.640277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.640557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 
00:30:08.643 [2024-11-17 19:39:06.640845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.640987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.641075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.641348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.641575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.641827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.641991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.642075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.642286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 
00:30:08.643 [2024-11-17 19:39:06.642541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.642800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.642943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.643058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.643170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.643196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.643 qpair failed and we were unable to recover it. 00:30:08.643 [2024-11-17 19:39:06.643333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.643414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.643 [2024-11-17 19:39:06.643439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.643553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.643643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.643669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.643821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.643964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.643990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.644104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 
00:30:08.644 [2024-11-17 19:39:06.644325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.644528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.644781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.644921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.645033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.645307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.645562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.645788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.645945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 
00:30:08.644 [2024-11-17 19:39:06.646116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.646381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.646620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.646835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.646995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.647184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.647342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.647367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.647450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.647532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.647557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.647642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.647758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.647784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 
00:30:08.644 [2024-11-17 19:39:06.647894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.648136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.648418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.648703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.648867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.649012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.649310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.649653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 
00:30:08.644 [2024-11-17 19:39:06.649852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.649991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.650129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.650271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.650299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.644 qpair failed and we were unable to recover it. 00:30:08.644 [2024-11-17 19:39:06.650404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.644 [2024-11-17 19:39:06.650571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.650596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.650694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.650779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.650804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.650921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.651303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.651581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 
00:30:08.645 [2024-11-17 19:39:06.651783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.651922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.652063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.652303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.652584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.652891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.652999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.653118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.653330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 
00:30:08.645 [2024-11-17 19:39:06.653564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.653848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.653945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.654113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.654437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.654669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.654874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.654980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.655131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 
00:30:08.645 [2024-11-17 19:39:06.655395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.655645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.655850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.655981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.656080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.656283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.656519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 00:30:08.645 [2024-11-17 19:39:06.656848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.645 [2024-11-17 19:39:06.656959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.645 qpair failed and we were unable to recover it. 
00:30:08.651 [2024-11-17 19:39:06.694566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.694699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.694727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.694861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.694941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.694966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.695083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.695357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.695612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.695812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.695951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.696117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.696244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.696270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 
00:30:08.651 [2024-11-17 19:39:06.696408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.696517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.696541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.696666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.696810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.696835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.696943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.697183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.697411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.697687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.697832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.697970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 
00:30:08.651 [2024-11-17 19:39:06.698217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.698454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.698730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.698833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.698926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.699062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.651 [2024-11-17 19:39:06.699089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.651 qpair failed and we were unable to recover it. 00:30:08.651 [2024-11-17 19:39:06.699230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.699332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.699357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.699477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.699550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.699574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.699727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.699848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.699876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 
00:30:08.652 [2024-11-17 19:39:06.699992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.700216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.700435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.700729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.700881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.701013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.701283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.701558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 
00:30:08.652 [2024-11-17 19:39:06.701801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.701944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.702108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.702406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.702650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.702880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.702995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.703107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.703400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 
00:30:08.652 [2024-11-17 19:39:06.703696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.703849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.703973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.704217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.704450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.704714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.704868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.704986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.705222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 
00:30:08.652 [2024-11-17 19:39:06.705500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.705750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.705883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.705999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.706094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.706122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.706274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.706391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.706418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.706554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.706700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.652 [2024-11-17 19:39:06.706725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.652 qpair failed and we were unable to recover it. 00:30:08.652 [2024-11-17 19:39:06.706859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.707132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 
00:30:08.653 [2024-11-17 19:39:06.707364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.707550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.707814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.707945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.708069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.708306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.708552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.708809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.708919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 
00:30:08.653 [2024-11-17 19:39:06.709006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.709222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.709480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.709755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.709895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.709972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.710212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.710557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 
00:30:08.653 [2024-11-17 19:39:06.710816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.710922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.711010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.711306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.711578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.711857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.711992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.712154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.712429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 
00:30:08.653 [2024-11-17 19:39:06.712698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.712804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.712923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.713142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.713383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.713617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.653 [2024-11-17 19:39:06.713897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.713981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.653 [2024-11-17 19:39:06.714006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.653 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.714114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.714226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.714251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 
00:30:08.654 [2024-11-17 19:39:06.714410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.714538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.714566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.714707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.714815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.714840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.714927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.715169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.715384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.715599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.715842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.715977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 
00:30:08.654 [2024-11-17 19:39:06.716079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.716192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.716217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.716371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.716528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.716556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.716686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.716824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.716849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.717001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.717272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.717533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.717769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.717913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 
00:30:08.654 [2024-11-17 19:39:06.717994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.718240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.718462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.718751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.718889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.718978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.719115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.719140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.719253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.719366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.719394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 00:30:08.654 [2024-11-17 19:39:06.719498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.719584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.654 [2024-11-17 19:39:06.719613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.654 qpair failed and we were unable to recover it. 
00:30:08.654 [2024-11-17 19:39:06.719748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.654 [2024-11-17 19:39:06.719867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.654 [2024-11-17 19:39:06.719892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:08.654 qpair failed and we were unable to recover it.
[... the same four-line failure sequence -- two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it." -- repeats for roughly 145 more connection attempts, timestamps 19:39:06.719973 through 19:39:06.758396 (console time 00:30:08.654 through 00:30:08.660), before the final attempts shown below ...]
00:30:08.660 [2024-11-17 19:39:06.758484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.758584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.758608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.758700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.758783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.758809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.758889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.759175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.759478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.759717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.759867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.759947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 
00:30:08.660 [2024-11-17 19:39:06.760241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.760508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.760760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.760863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.761016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.761266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.761539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.761834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.761940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 
00:30:08.660 [2024-11-17 19:39:06.762053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.762299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.762551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.762763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.762896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.763001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.763251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.763562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 
00:30:08.660 [2024-11-17 19:39:06.763812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.763916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.764057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.764144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.764167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.764295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.764385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.764431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.660 qpair failed and we were unable to recover it. 00:30:08.660 [2024-11-17 19:39:06.764552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.764627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.660 [2024-11-17 19:39:06.764652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.764777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.764888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.764912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.764991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.765202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 
00:30:08.661 [2024-11-17 19:39:06.765516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.765728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.765889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.766005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.766238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.766519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.766789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.766915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.767046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 
00:30:08.661 [2024-11-17 19:39:06.767288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.767517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.767797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.767906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.768026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.768265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.768593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.768858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.768972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 
00:30:08.661 [2024-11-17 19:39:06.769062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.769216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.769243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.769395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.769515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.769543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.769659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.769836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.769861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.769989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.770214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.770518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.770815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.770951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 
00:30:08.661 [2024-11-17 19:39:06.771062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.771332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.771593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.771831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.771976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.661 qpair failed and we were unable to recover it. 00:30:08.661 [2024-11-17 19:39:06.772103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.772223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.661 [2024-11-17 19:39:06.772249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.772342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.772476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.772504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.772608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.772746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.772771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 
00:30:08.662 [2024-11-17 19:39:06.772863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.772953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.772996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.773120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.773265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.773292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.773445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.773564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.773592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.773731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.773814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.773838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.773921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.774181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.774458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 
00:30:08.662 [2024-11-17 19:39:06.774741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.774874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.774957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.775156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.775432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.775698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.775825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.775935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.776128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 
00:30:08.662 [2024-11-17 19:39:06.776463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.776771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.776912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.777047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.777271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.777517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.777787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.777893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.777984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 
00:30:08.662 [2024-11-17 19:39:06.778253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.778481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.778712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.778830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.778911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.779060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.779088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.779249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.779373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.779401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.662 qpair failed and we were unable to recover it. 00:30:08.662 [2024-11-17 19:39:06.779587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.662 [2024-11-17 19:39:06.779712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.779754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.779847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.779974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 
00:30:08.663 [2024-11-17 19:39:06.780123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.780440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.780740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.780853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.780948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.781076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.781103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.781238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.781373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.781401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.781523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.781649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.781683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.781848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 
00:30:08.663 [2024-11-17 19:39:06.782167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.782486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.782773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.782880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.782969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.783202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.783479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.783764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.783899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 
00:30:08.663 [2024-11-17 19:39:06.784033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.784300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.784570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.784818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.784933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.785083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.785232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.785255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.785380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.785529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.785556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.785651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.785839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.785865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 
00:30:08.663 [2024-11-17 19:39:06.785947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.786206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.786450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.663 qpair failed and we were unable to recover it. 00:30:08.663 [2024-11-17 19:39:06.786733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.663 [2024-11-17 19:39:06.786815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.786840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.786954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.787174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.787395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 
00:30:08.664 [2024-11-17 19:39:06.787701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.787847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.787964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.788228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.788527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.788872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.788988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.789025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.789143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.789288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.789325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.789535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.789654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.789749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 
00:30:08.664 [2024-11-17 19:39:06.789923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.790224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.790550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.790857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.790985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.791068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.791317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.791573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 
00:30:08.664 [2024-11-17 19:39:06.791819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.791951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.792061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.792191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.792219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.792315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.792439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.792467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.792559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.792672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.792723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.792865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.793163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.793471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 
00:30:08.664 [2024-11-17 19:39:06.793758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.793868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.793981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.794135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.794163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.794298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.794443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.794471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.664 qpair failed and we were unable to recover it. 00:30:08.664 [2024-11-17 19:39:06.794631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.794758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.664 [2024-11-17 19:39:06.794783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.794896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.795177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.795422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 
00:30:08.665 [2024-11-17 19:39:06.795742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.795875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.795992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.796284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.796552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.796810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.796949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.797064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.797149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.797174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 
00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Write completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 Read completed with error (sct=0, sc=8) 00:30:08.665 starting I/O failed 00:30:08.665 [2024-11-17 19:39:06.797512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.665 [2024-11-17 19:39:06.797707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.797826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.797855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 
00:30:08.665 [2024-11-17 19:39:06.797981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.798251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.798528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.798835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.798938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.799032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.799298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.799485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 
00:30:08.665 [2024-11-17 19:39:06.799745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.799919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.800079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.800245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.800270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.800387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.800480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.800505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.665 qpair failed and we were unable to recover it. 00:30:08.665 [2024-11-17 19:39:06.800657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.665 [2024-11-17 19:39:06.800755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.800784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.800877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.801177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.801428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 
00:30:08.666 [2024-11-17 19:39:06.801688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.801844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.801986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.802099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.802124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.802311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.802465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.802511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.802670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.802792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.802817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.802969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.803220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.803463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 
00:30:08.666 [2024-11-17 19:39:06.803714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.803818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.803928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.804251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.804453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.804688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.804855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.804966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.805119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.805144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.805252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.805373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.805407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 
00:30:08.666 [2024-11-17 19:39:06.805570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.805727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.805780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.805897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.806059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.806096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.806241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.806386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.806421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.806559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.806749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.806786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.806927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.807251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.807526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 
00:30:08.666 [2024-11-17 19:39:06.807869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.807998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.808147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.808378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.666 [2024-11-17 19:39:06.808646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.666 [2024-11-17 19:39:06.808799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.666 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.808917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.809182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.809423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 
00:30:08.667 [2024-11-17 19:39:06.809646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.809791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.809901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.810200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.810415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.810611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.810868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.810975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.811144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 
00:30:08.667 [2024-11-17 19:39:06.811399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.811596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.811871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.811980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.812067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.812364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.812631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.812838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.812953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 
00:30:08.667 [2024-11-17 19:39:06.813029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.813255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.813538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.813823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.813958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.814052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.814230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.814259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.814395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.814515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.814542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.814639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.814797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.814824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 
00:30:08.667 [2024-11-17 19:39:06.814937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.815126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.815360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.815582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.667 qpair failed and we were unable to recover it. 00:30:08.667 [2024-11-17 19:39:06.815893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.667 [2024-11-17 19:39:06.815983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.816098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.816352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 
00:30:08.668 [2024-11-17 19:39:06.816568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.816787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.816901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.816988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.817217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.817410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.817669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.817831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.817961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 
00:30:08.668 [2024-11-17 19:39:06.818205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.818437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.818664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.818858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.818994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.819075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.819310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.819608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 
00:30:08.668 [2024-11-17 19:39:06.819831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.819993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.820108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.820316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.820533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.820797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.820907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.821024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.821285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 
00:30:08.668 [2024-11-17 19:39:06.821498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.668 qpair failed and we were unable to recover it. 00:30:08.668 [2024-11-17 19:39:06.821725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.668 [2024-11-17 19:39:06.821818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.821843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.821973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.822174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.822438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.822736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.822869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.822959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 
00:30:08.669 [2024-11-17 19:39:06.823184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.823434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.823680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.823837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.823976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.824180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.824446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.824725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.824840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 
00:30:08.669 [2024-11-17 19:39:06.824979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.825224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.825513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.825712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.825867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.825964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.826166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.826387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 
00:30:08.669 [2024-11-17 19:39:06.826644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.826815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.826902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.827146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.827367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.827600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.827869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.827980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.828083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.828181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.828211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 
00:30:08.669 [2024-11-17 19:39:06.828329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.828426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.828454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.669 [2024-11-17 19:39:06.828544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.828663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.669 [2024-11-17 19:39:06.828698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.669 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.828829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.828945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.828971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.829081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.829319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.829554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.829844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.829977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 
00:30:08.670 [2024-11-17 19:39:06.830080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.830323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.830569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.830832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.830943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.831089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.831332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.831556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 
00:30:08.670 [2024-11-17 19:39:06.831838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.831949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.832089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.832176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.832201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.832292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.832409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.832458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.832608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.832757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.832786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.832922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.833159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.833360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 
00:30:08.670 [2024-11-17 19:39:06.833614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.833869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.833981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.834109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.834349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.834624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.834851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.834961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.835059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.835169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.835195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 
00:30:08.670 [2024-11-17 19:39:06.835274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.835401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.835429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.670 [2024-11-17 19:39:06.835514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.835635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.670 [2024-11-17 19:39:06.835662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.670 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.835833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.835922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.835947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.836116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.836374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.836609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.836845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.836984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-11-17 19:39:06.837108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.837365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.837621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.837856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.837999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.838131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.838377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.838608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-11-17 19:39:06.838864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.838980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.839095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.839322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.839595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.839854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.839963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.840074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.840314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-11-17 19:39:06.840549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.840852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.840962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.841119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.841255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.841282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.841398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.841504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.841533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.841688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.841825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.841850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.841964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.842180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 
00:30:08.671 [2024-11-17 19:39:06.842413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.842717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.842884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.671 qpair failed and we were unable to recover it. 00:30:08.671 [2024-11-17 19:39:06.842973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.671 [2024-11-17 19:39:06.843058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.843197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.843437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.843656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.843784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.843889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-11-17 19:39:06.844164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.844377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.844657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.844810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.844909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.845186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.845431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.845713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.845837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-11-17 19:39:06.845974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.846197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.846455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.846679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.846801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.846957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.847239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.847490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-11-17 19:39:06.847784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.847939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.848038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.848235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.848466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.848695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.848821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.848921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.849032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.849058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 00:30:08.672 [2024-11-17 19:39:06.849149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.849293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.672 [2024-11-17 19:39:06.849318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.672 qpair failed and we were unable to recover it. 
00:30:08.672 [2024-11-17 19:39:06.849411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.672 [2024-11-17 19:39:06.849525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.672 [2024-11-17 19:39:06.849553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:08.672 qpair failed and we were unable to recover it.
[... the same pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-17 19:39:06.849411 through 19:39:06.886474, log offsets 00:30:08.672 through 00:30:08.955 ...]
00:30:08.955 [2024-11-17 19:39:06.886609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.886723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.886751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.886838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.886913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.886937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.887050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.887292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.887534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.887817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.887964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.888048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 
00:30:08.955 [2024-11-17 19:39:06.888313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.888565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.888818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.888939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.889032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.889266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.889496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 00:30:08.955 [2024-11-17 19:39:06.889701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.889840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.955 qpair failed and we were unable to recover it. 
00:30:08.955 [2024-11-17 19:39:06.889926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.955 [2024-11-17 19:39:06.890038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.890155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.890438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.890616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.890859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.890982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.891067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.891325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 
00:30:08.956 [2024-11-17 19:39:06.891567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.891822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.891942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.892035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.892327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.892520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.892784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.892890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.893017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 
00:30:08.956 [2024-11-17 19:39:06.893262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.893466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.893692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.893852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.893943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.894180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.894404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.894627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 
00:30:08.956 [2024-11-17 19:39:06.894872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.894967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.895095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.895313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.895544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.895796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.895928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.896042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.896155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.896180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.956 [2024-11-17 19:39:06.896289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.896367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.896393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 
00:30:08.956 [2024-11-17 19:39:06.896537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.896650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.956 [2024-11-17 19:39:06.896683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.956 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.896793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.896886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.896915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.897044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.897252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.897527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.897783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.897902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.898015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 
00:30:08.957 [2024-11-17 19:39:06.898257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.898519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.898785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.898891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.898976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.899168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.899424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.899646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 
00:30:08.957 [2024-11-17 19:39:06.899894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.899981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.900124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.900314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.900563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.900855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.900995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.901104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.901376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 
00:30:08.957 [2024-11-17 19:39:06.901636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.901842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.901982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.902119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.902376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.902620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.902865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.902976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.903001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.957 [2024-11-17 19:39:06.903132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.903228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.903258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 
00:30:08.957 [2024-11-17 19:39:06.903388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.903563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.957 [2024-11-17 19:39:06.903589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.957 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.903706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.903825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.903851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.903932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.904182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.904431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.904638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.904785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.904922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 
00:30:08.958 [2024-11-17 19:39:06.905107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.905310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.905578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.905860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.905969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.906081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.906332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.906565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 
00:30:08.958 [2024-11-17 19:39:06.906816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.906952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.907027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.907227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.907460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.907694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.907881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.907993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.908106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 
00:30:08.958 [2024-11-17 19:39:06.908399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.908684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.908846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.908933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.909223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.909509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.909776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.909884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 00:30:08.958 [2024-11-17 19:39:06.909987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.910113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.958 [2024-11-17 19:39:06.910141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:08.958 qpair failed and we were unable to recover it. 
00:30:08.958 [2024-11-17 19:39:06.910291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.958 [2024-11-17 19:39:06.910383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.958 [2024-11-17 19:39:06.910411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:08.958 qpair failed and we were unable to recover it.
00:30:08.959 [2024-11-17 19:39:06.911766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.959 [2024-11-17 19:39:06.911888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.959 [2024-11-17 19:39:06.911917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420
00:30:08.959 qpair failed and we were unable to recover it.
00:30:08.959 [2024-11-17 19:39:06.914782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbfe00 is same with the state(5) to be set
00:30:08.959 [2024-11-17 19:39:06.914930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.959 [2024-11-17 19:39:06.915053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.959 [2024-11-17 19:39:06.915080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:08.959 qpair failed and we were unable to recover it.
00:30:08.964 [2024-11-17 19:39:06.942936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.964 [2024-11-17 19:39:06.943083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.964 [2024-11-17 19:39:06.943116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:08.964 qpair failed and we were unable to recover it.
00:30:08.965 [... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair values 0x7f0348000b90, 0x7f034c000b90, 0xbb2390 and 0x7f0354000b90 (addr=10.0.0.2, port=4420) through 2024-11-17 19:39:06.948015 ...]
00:30:08.965 [2024-11-17 19:39:06.948143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.948382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.948656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.948873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.948975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.949132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.949368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.949629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 
00:30:08.965 [2024-11-17 19:39:06.949889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.949967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.950184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.950385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.950597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.950860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.950977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.951073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.951313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 
00:30:08.965 [2024-11-17 19:39:06.951551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.951772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.951882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.951984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.952244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.952437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.952649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.952816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 00:30:08.965 [2024-11-17 19:39:06.952904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.953012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.953037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.965 qpair failed and we were unable to recover it. 
00:30:08.965 [2024-11-17 19:39:06.953130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.965 [2024-11-17 19:39:06.953228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.953257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.953348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.953497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.953525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.953653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.953788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.953814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.953930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.954164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.954415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.954654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.954816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 
00:30:08.966 [2024-11-17 19:39:06.954935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.955175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.955398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.955626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.955798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.955919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.956127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.956395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 
00:30:08.966 [2024-11-17 19:39:06.956666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.956784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.956914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.957127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.957378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.957587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.957815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.957981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.958095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 
00:30:08.966 [2024-11-17 19:39:06.958310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.958579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.966 [2024-11-17 19:39:06.958816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.966 [2024-11-17 19:39:06.958921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.966 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.959092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.959339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.959602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.959831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.959941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 
00:30:08.967 [2024-11-17 19:39:06.960050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.960297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.960516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.960818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.960935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.961067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.961292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.961530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 
00:30:08.967 [2024-11-17 19:39:06.961774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.961880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.961962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.962171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.962383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.962626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.962840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.962987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.963122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 
00:30:08.967 [2024-11-17 19:39:06.963358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.963630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.963781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.963904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.964111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.964376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.964591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.964784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.964958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 
00:30:08.967 [2024-11-17 19:39:06.965049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.965232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.965422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.967 qpair failed and we were unable to recover it. 00:30:08.967 [2024-11-17 19:39:06.965686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.967 [2024-11-17 19:39:06.965815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.965891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.966115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.966293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 
00:30:08.968 [2024-11-17 19:39:06.966512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.966767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.966913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.967019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.967262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.967479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.967736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.967852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.968003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 
00:30:08.968 [2024-11-17 19:39:06.968206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.968399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.968651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.968856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.968974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.969067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.969331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.969568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 
00:30:08.968 [2024-11-17 19:39:06.969801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.969927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.970073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.970269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.970498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.968 [2024-11-17 19:39:06.970763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.968 [2024-11-17 19:39:06.970857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.968 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.970935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.971122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 
00:30:08.969 [2024-11-17 19:39:06.971372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.971580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.971819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.971947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.972082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.972261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.972550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.972789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.972899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 
00:30:08.969 [2024-11-17 19:39:06.972990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.973257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.973470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.973648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.973876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.973987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.974116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.974369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 
00:30:08.969 [2024-11-17 19:39:06.974578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.974877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.974992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.975096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.975325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.975603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.975812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.975942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.976036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 
00:30:08.969 [2024-11-17 19:39:06.976284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.976470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.976742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.976893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.977008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.977130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.977155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.969 [2024-11-17 19:39:06.977287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.977377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.969 [2024-11-17 19:39:06.977405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.969 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.977517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.977638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.977666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.977835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.977921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.977947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 
00:30:08.970 [2024-11-17 19:39:06.978028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.978256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.978519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.978755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.978937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.979031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.979314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.979551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 
00:30:08.970 [2024-11-17 19:39:06.979780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.979900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.980007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.980283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.980493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.980729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.980844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.980962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.981184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 
00:30:08.970 [2024-11-17 19:39:06.981438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.981734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.981860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.982016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.982289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.982524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.982727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.982874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.970 [2024-11-17 19:39:06.982980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.983068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.983094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 
00:30:08.970 [2024-11-17 19:39:06.983209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.983298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.970 [2024-11-17 19:39:06.983323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.970 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.983395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.983509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.983534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.983687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.983772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.983797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.983934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.984152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.984446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.984726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.984848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 
00:30:08.971 [2024-11-17 19:39:06.984941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.985220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.985430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.985655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.985838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.985968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.986177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.986445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 
00:30:08.971 [2024-11-17 19:39:06.986664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.986856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.986953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.987033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.987263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.987461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.987731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.987848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.987954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.988045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.988070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 
00:30:08.971 [2024-11-17 19:39:06.988148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.988252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.988279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.988399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.988498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.971 [2024-11-17 19:39:06.988525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.971 qpair failed and we were unable to recover it. 00:30:08.971 [2024-11-17 19:39:06.988640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.988742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.988767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.988878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.989150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.989331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.989524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 
00:30:08.972 [2024-11-17 19:39:06.989833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.989951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.990043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.990267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.990524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.990750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.990848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.990935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.991124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 
00:30:08.972 [2024-11-17 19:39:06.991335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.991547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.991775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.991928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.992026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.992246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.992525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.992741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.992847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 
00:30:08.972 [2024-11-17 19:39:06.992930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.993181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.993418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.993606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.993848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.993961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.994090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.994332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 
00:30:08.972 [2024-11-17 19:39:06.994606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.994872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.994985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.972 [2024-11-17 19:39:06.995105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.995252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.972 [2024-11-17 19:39:06.995277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.972 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.995362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.995445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.995470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.995546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.995652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.995681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.995813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.995929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.995956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.996044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 
00:30:08.973 [2024-11-17 19:39:06.996297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.996538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.996747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.996867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.996973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.997217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.997441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.997714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.997817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 
00:30:08.973 [2024-11-17 19:39:06.997947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.998187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.998462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.998733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.998839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.998922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.999172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.999382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 
00:30:08.973 [2024-11-17 19:39:06.999641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:06.999902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:06.999988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:07.000094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:07.000396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:07.000577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:07.000810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.000943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 00:30:08.973 [2024-11-17 19:39:07.001050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.001178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.001212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.973 qpair failed and we were unable to recover it. 
00:30:08.973 [2024-11-17 19:39:07.001326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.001427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.973 [2024-11-17 19:39:07.001463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.001567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.001722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.001761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.001936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.002204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.002428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.002670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.002858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.002961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.003091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.003126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 
00:30:08.974 [2024-11-17 19:39:07.003231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.003343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.003383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.003513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.003690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.003730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.003914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.004153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.004404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.004651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.004880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.004986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 
00:30:08.974 [2024-11-17 19:39:07.005134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.005455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.005795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.005954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.006098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.006208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.006246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.006441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.006529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.006556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.006666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.006759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.006788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.006916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 
00:30:08.974 [2024-11-17 19:39:07.007168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.007438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.007749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.974 [2024-11-17 19:39:07.007936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.974 qpair failed and we were unable to recover it. 00:30:08.974 [2024-11-17 19:39:07.008094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.008391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.008655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.008912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.008992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 
00:30:08.975 [2024-11-17 19:39:07.009117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.009410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.009697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.009834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.010024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.010276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.010557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.010787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.010901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 
00:30:08.975 [2024-11-17 19:39:07.011017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.011250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.011489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.011805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.011959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.012090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.012244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.012278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.012427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.012570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.012610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.012714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.012844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.012869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 
00:30:08.975 [2024-11-17 19:39:07.013014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.013258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.013567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.013878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.013977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.014133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.014502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.975 qpair failed and we were unable to recover it. 00:30:08.975 [2024-11-17 19:39:07.014826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.975 [2024-11-17 19:39:07.014993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 
00:30:08.976 [2024-11-17 19:39:07.015157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.015274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.015302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.015390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.015483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.015511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.015624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.015731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.015757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.015903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.016144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.016383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.016658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.016850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 
00:30:08.976 [2024-11-17 19:39:07.016971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.017149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.017184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.017319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.017448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.017483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.017620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.017775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.017807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.017903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.018162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.018387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.018618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.018784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 
00:30:08.976 [2024-11-17 19:39:07.018916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.019169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.019500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.019816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.019954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.020084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.020333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.020591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 
00:30:08.976 [2024-11-17 19:39:07.020806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.020981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.021097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.021201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.021239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.021391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.021518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.021559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.021719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.021868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.021902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.022059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.022193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.022235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.022387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.022476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.022501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.976 qpair failed and we were unable to recover it. 00:30:08.976 [2024-11-17 19:39:07.022590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.976 [2024-11-17 19:39:07.022711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.022738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 
00:30:08.977 [2024-11-17 19:39:07.022831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.022913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.022938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.023033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.023138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.023172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.023327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.023469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.023507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.023703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.023828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.023863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.023995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.024266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.024506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 
00:30:08.977 [2024-11-17 19:39:07.024746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.024858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.024990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.025131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.025169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.025323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.025432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.025470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.025628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.025782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.025818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.025949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.026248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.026496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 
00:30:08.977 [2024-11-17 19:39:07.026698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.026837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.026952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.027246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.027538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.027825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.027999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.028148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.028246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.028280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.028436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.028537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.028567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 
00:30:08.977 [2024-11-17 19:39:07.028669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.028779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.028808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.028945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.029142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.029464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.029802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.029937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.030118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.030263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.030302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 00:30:08.977 [2024-11-17 19:39:07.030476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.030574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.977 [2024-11-17 19:39:07.030603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.977 qpair failed and we were unable to recover it. 
00:30:08.978 [2024-11-17 19:39:07.030711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.030791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.030816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.030964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.031204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.031411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.031593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.031782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.031925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.032256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 
00:30:08.978 [2024-11-17 19:39:07.032572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.032838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.032996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.033113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.033219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.033244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.033372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.033476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.033503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.033593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.033745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.033784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.033945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.034044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.034078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.034229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.034371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.034411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 
00:30:08.978 [2024-11-17 19:39:07.034584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.034692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.034731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.034920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.035240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.035511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.035764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.035873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.035969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.036189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 
00:30:08.978 [2024-11-17 19:39:07.036471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.036785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.036971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.037081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.037220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.037258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.037383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.037542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.037569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.037690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.037808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.037849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.037966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.038055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.038082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.978 [2024-11-17 19:39:07.038172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.038305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.038338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 
00:30:08.978 [2024-11-17 19:39:07.038463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.038569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.978 [2024-11-17 19:39:07.038607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.978 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.038750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.038893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.038932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.039081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.039205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.039240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.039424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.039567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.039594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.039703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.039808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.039849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.039955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.040155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 
00:30:08.979 [2024-11-17 19:39:07.040353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.040663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.040854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.040973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.041077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.041115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.041290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.041435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.041474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.041628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.041776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.041813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.041921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.042168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 
00:30:08.979 [2024-11-17 19:39:07.042470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.042678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.042777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.042891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.043171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.043450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.043751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.043926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.044079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 
00:30:08.979 [2024-11-17 19:39:07.044371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.044578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.044824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.044946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.045105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.045261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.045295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.045404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.045544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.045584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.045722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.045880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.045914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.979 qpair failed and we were unable to recover it. 00:30:08.979 [2024-11-17 19:39:07.046083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.046190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.979 [2024-11-17 19:39:07.046231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 
00:30:08.980 [2024-11-17 19:39:07.046359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.046476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.046504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.046614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.046722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.046747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.046876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.046995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.047118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.047326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.047590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.047744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.047886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 
00:30:08.980 [2024-11-17 19:39:07.048273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.048571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.048825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.048985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.049073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.049156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.049181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.049318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.049445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.049483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.049624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.049774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.049813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.049979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.050104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.050139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 
00:30:08.980 [2024-11-17 19:39:07.050308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.050442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.050474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.050632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.050744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.050774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.050889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.051131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.051353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.051585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.051817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.051937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 
00:30:08.980 [2024-11-17 19:39:07.052063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.052182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.052220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.052379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.052501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.052535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.052695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.052809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.052848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.052967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.053136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.053174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.053302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.053392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.053419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.980 qpair failed and we were unable to recover it. 00:30:08.980 [2024-11-17 19:39:07.053505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.053589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.980 [2024-11-17 19:39:07.053615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.053709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.053850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.053879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 
00:30:08.981 [2024-11-17 19:39:07.054008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.054255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.054483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.054745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.054858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.055012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.055313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.055607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 
00:30:08.981 [2024-11-17 19:39:07.055825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.055930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.056082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.056297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.056539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.056799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.056955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.057060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.057284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 
00:30:08.981 [2024-11-17 19:39:07.057569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.057838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.057949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.058038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.058313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.058606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.058824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.058931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.059053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 
00:30:08.981 [2024-11-17 19:39:07.059353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.059607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.059819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.059932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.060072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.060311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.060583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.981 qpair failed and we were unable to recover it. 00:30:08.981 [2024-11-17 19:39:07.060884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.981 [2024-11-17 19:39:07.060994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 
00:30:08.982 [2024-11-17 19:39:07.061150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.061383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.061614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.061829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.061991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.062086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.062210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.062238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.062392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.062504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.062529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.062672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.062801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.062829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 
00:30:08.982 [2024-11-17 19:39:07.062982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.063297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.063595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.063822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.063984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.064112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.064354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.064581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 
00:30:08.982 [2024-11-17 19:39:07.064819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.064925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.065002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.065240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.065482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.065702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.065850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.065986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.066258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 
00:30:08.982 [2024-11-17 19:39:07.066526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.066769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.066886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.066970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.067192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.067475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.067758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.067871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 00:30:08.982 [2024-11-17 19:39:07.067995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.068112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.982 [2024-11-17 19:39:07.068140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.982 qpair failed and we were unable to recover it. 
00:30:08.982 [2024-11-17 19:39:07.068273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.068387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.068412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 00:30:08.983 [2024-11-17 19:39:07.068521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.068614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.068640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 00:30:08.983 [2024-11-17 19:39:07.068741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.068867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.068896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 00:30:08.983 [2024-11-17 19:39:07.068981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 00:30:08.983 [2024-11-17 19:39:07.069262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 00:30:08.983 [2024-11-17 19:39:07.069525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 00:30:08.983 [2024-11-17 19:39:07.069784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.983 [2024-11-17 19:39:07.069897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.983 qpair failed and we were unable to recover it. 
00:30:08.983 [2024-11-17 19:39:07.070010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.983 [2024-11-17 19:39:07.070116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.983 [2024-11-17 19:39:07.070141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:08.983 qpair failed and we were unable to recover it.
[identical four-line failure sequence repeating continuously for timestamps 19:39:07.070 through 19:39:07.110: two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0xbb2390 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:30:08.988 [2024-11-17 19:39:07.110084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.988 [2024-11-17 19:39:07.110161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.988 [2024-11-17 19:39:07.110186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:08.988 qpair failed and we were unable to recover it.
00:30:08.988 [2024-11-17 19:39:07.110297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.110428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.110456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.110545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.110635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.110663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.110847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.110966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.110991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.111126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.111251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.111279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.111413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.111532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.111560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.111700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.111807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.111832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.111946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.112054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.112081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 
00:30:08.988 [2024-11-17 19:39:07.112203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.112327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.112355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-11-17 19:39:07.112485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.112601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-11-17 19:39:07.112627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.112753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.112885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.112912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.112999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.113118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.113146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.113254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.113340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.113366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.113486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.113615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.113643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.113806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-11-17 19:39:07.114174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.114424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.114645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.114827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.114905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.115169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.115432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.115752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.115991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-11-17 19:39:07.116080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.116326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.116568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.116798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.116933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.117068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.117333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.117638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-11-17 19:39:07.117872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.117998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.118180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.118443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.118735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.118885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.119026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.119164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.119189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.119284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.119439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.119467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-11-17 19:39:07.119600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.119736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-11-17 19:39:07.119764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-11-17 19:39:07.119893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.119984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.120167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.120422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.120687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.120885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.120990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.121104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.121350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-11-17 19:39:07.121537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.121779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.121943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.122043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.122331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.122565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.122833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.122945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.123052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-11-17 19:39:07.123266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.123524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.123741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.123882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.124003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.124220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.124440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.124743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.124865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-11-17 19:39:07.124964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.125199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.125515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.125798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.125940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.126094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.126368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-11-17 19:39:07.126615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-11-17 19:39:07.126856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-11-17 19:39:07.126984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.127126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.127378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.127643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.127761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.127886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.128178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.128418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-11-17 19:39:07.128624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.128812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.128923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.129224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.129471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.129718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.129818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.129935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.130169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-11-17 19:39:07.130413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.130646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.130807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.130934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.131163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.131425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.131701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.131831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.131935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-11-17 19:39:07.132217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.132430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.132705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.132838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.132971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.133277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.133467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.133752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.133894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-11-17 19:39:07.133980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.134112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-11-17 19:39:07.134153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-11-17 19:39:07.134257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.134337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.134362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.134462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.134573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.134600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.134696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.134823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.134850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.134955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.135217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.135503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 
00:30:08.992 [2024-11-17 19:39:07.135782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.135896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.135974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.136176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.136454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.136679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.136854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.136942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.137065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.137092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 00:30:08.992 [2024-11-17 19:39:07.137221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.137335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.992 [2024-11-17 19:39:07.137360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.992 qpair failed and we were unable to recover it. 
00:30:08.997 [2024-11-17 19:39:07.175483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.175585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.175625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.175725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.175819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.175844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.175953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.176205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.176450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.176718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.176852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.176969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 
00:30:08.997 [2024-11-17 19:39:07.177241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.177497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.177720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.997 [2024-11-17 19:39:07.177898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.997 qpair failed and we were unable to recover it. 00:30:08.997 [2024-11-17 19:39:07.177993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.178235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.178478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.178671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.178865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-11-17 19:39:07.178973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.179084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.179109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.179230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.179357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.179384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.179577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.179708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.179737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.179874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.180157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.180429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.180750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.180909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-11-17 19:39:07.181004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.181244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.181559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.181787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.181946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.182067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.182287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.182503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-11-17 19:39:07.182821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.182983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.183113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.183362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.183622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.183907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.183998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.184160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.184532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-11-17 19:39:07.184839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.184945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.185131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.185239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.185264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-11-17 19:39:07.185426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-11-17 19:39:07.185619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.185647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.185814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.185901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.185927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.186094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.186229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.186254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.186366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.186497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.186526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.186637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.186781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.186808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-11-17 19:39:07.186896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.187172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.187472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.187752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.187900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.188010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.188231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.188455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-11-17 19:39:07.188739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.188890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.189012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.189240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.189445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.189692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.189828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.189913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.190120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-11-17 19:39:07.190428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.190686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.190849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.190982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.191247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.191465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.191699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.191888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.192040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-11-17 19:39:07.192290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.192475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.192771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.192909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.193082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.193190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.193215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.193325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.193457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.193484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-11-17 19:39:07.193623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.193744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-11-17 19:39:07.193770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.193903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-11-17 19:39:07.194151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.194417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.194638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.194766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.194872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.195176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.195403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.195642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.195822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-11-17 19:39:07.195953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.196230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.196512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.196797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.196960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.197061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.197307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.197595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-11-17 19:39:07.197795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.197954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.198104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.198236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.198261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.198340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.198480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.198505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.198636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.198764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.198792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.198918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.199220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.199504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-11-17 19:39:07.199786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.199932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.200060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.200174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.200199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.200357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.200557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.200585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.200748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.200886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.200911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.200986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.201124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.201149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.201256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.201373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.201402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.201596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.201715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.201743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-11-17 19:39:07.201913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.202098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.202123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-11-17 19:39:07.202286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.202476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-11-17 19:39:07.202504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.001 [2024-11-17 19:39:07.202656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-11-17 19:39:07.202792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-11-17 19:39:07.202819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-11-17 19:39:07.202928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-11-17 19:39:07.203016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-11-17 19:39:07.203040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-11-17 19:39:07.203117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-11-17 19:39:07.203246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.203274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.203378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.203522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.203551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.203658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.203780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.203806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-11-17 19:39:07.203910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.204206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.204431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.204637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.204795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.204890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.205238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.205460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-11-17 19:39:07.205649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.205782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.205989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.206245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.206476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.206798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.206906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.207008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.207239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-11-17 19:39:07.207482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.207725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.207856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.207979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.208103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.208131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.208233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.208311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.208335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-11-17 19:39:07.208447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.208519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-11-17 19:39:07.208543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.208704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.208830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.208859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.208995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-11-17 19:39:07.209212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.209435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.209692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.209835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.209945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.210221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.210504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.210745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.210933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-11-17 19:39:07.211016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.211306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.211564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.211812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.211964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.212104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.212237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.212262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.212376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.212493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.212520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.212604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.212732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.212761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-11-17 19:39:07.212862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.213219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.213513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.213719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.213855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.213944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.214180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.214426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-11-17 19:39:07.214755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.214922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.215049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.215329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.215568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.215822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.215976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.216077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.216188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.216214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.216300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.216446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.216474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-11-17 19:39:07.216593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.216725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.216755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.216917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.217138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.217386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.217693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.217855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.218003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.218257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-11-17 19:39:07.218454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.218723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.218842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.218967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.219093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.219121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.219219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.219327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.219351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-11-17 19:39:07.219430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.219557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-11-17 19:39:07.219584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.219738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.219874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.219901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.220014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-11-17 19:39:07.220267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.220568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.220870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.220981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.221136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.221337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.221638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.221788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.221926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-11-17 19:39:07.222128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.222374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.222582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.222871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.222985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.223171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.223406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.223667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.223843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-11-17 19:39:07.223955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.224171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.224420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.224697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.224858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.224986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.225198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.225449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-11-17 19:39:07.225759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.225887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.226007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.226290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.226550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.226832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.226930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.227021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.227257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-11-17 19:39:07.227551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.227867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.227979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.228057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.228166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.228190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.228356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.228469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.228497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.228598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.228743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.228769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.228889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.229200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-11-17 19:39:07.229465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.229653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.229823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.229928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.230191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.230481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-11-17 19:39:07.230709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-11-17 19:39:07.230796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.230824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.230958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-11-17 19:39:07.231169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.231377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.231660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.231801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.231896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.232202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.232450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.232696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.232835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-11-17 19:39:07.232917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.233244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.233498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.233775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.233880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.233966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.234188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.234528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-11-17 19:39:07.234812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.234951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.235087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.235319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.235525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.235795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.235912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.236026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.236300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-11-17 19:39:07.236512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.236804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.236960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.237067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.237362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.237637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.237911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.237990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.238124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-11-17 19:39:07.238353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.238565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.238805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.238997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.239100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.239184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.239209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.239365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.239479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.239504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.239632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.239774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.239800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-11-17 19:39:07.239923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-11-17 19:39:07.240071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.240198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.240482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.240732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.240870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.241012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.241276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.241529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.241803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.241936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.242090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.242215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.242242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.242399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.242519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.242549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.242716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.242806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.242831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.242922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.243144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.243427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.243686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.243898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.243975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.244100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.244325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.244597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.244877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.244996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.245136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.245367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.245616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.245847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.245999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.246129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.246218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.246245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.246357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.246490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.246514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.246666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.246800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.246828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.246939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.247215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.247475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.247705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.247863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.247997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.248297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.248528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.248848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.248980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.249080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.249230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.249258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.249373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.249520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.249548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.249654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.249738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.249763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.249899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.250184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.250414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.250632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 00:30:09.301 [2024-11-17 19:39:07.250897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.250991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.301 [2024-11-17 19:39:07.251018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.301 qpair failed and we were unable to recover it. 
00:30:09.301 [2024-11-17 19:39:07.251126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.251236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.251262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.251346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.251431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.251455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.251593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.251744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.251773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.251900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.252196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.252401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.252659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.252799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 
00:30:09.302 [2024-11-17 19:39:07.252912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.253162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.253435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.253666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.253821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.253940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.254195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.254398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 
00:30:09.302 [2024-11-17 19:39:07.254626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.254788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.254899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.255161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.255391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.255624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.255896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.255998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.256173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 
00:30:09.302 [2024-11-17 19:39:07.256395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.256581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.256807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.256959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.257092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.257201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.257226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.257357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.257474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.257502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.257636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.257793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.257826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.257937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 
00:30:09.302 [2024-11-17 19:39:07.258223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.258475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.258770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.258878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.258996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.259197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.259410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.259704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.259854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 
00:30:09.302 [2024-11-17 19:39:07.259982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.260254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.260556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.260791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.260892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.261034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.261235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.261409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 
00:30:09.302 [2024-11-17 19:39:07.261649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.302 [2024-11-17 19:39:07.261762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.302 qpair failed and we were unable to recover it. 00:30:09.302 [2024-11-17 19:39:07.261856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.261945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.261969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.262061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.262185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.262213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.262320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.262429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.262454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.262595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.262695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.262723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.262879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.263179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 
00:30:09.303 [2024-11-17 19:39:07.263408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.263702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.263871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.264007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.264247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.264525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.264777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.264921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.265033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 
00:30:09.303 [2024-11-17 19:39:07.265324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.265578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.265838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.265973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.266089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.266187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.266214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.266319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.266453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.266478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.266631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.266734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.266763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.266909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 
00:30:09.303 [2024-11-17 19:39:07.267245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.267545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.267819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.267965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.268122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.268233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.268257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.268391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.268495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.268522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.268635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.268771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.268797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.268915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 
00:30:09.303 [2024-11-17 19:39:07.269239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.269493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.269752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.269859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.269968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.270127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.270155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.270277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.270400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.270428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.270588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.270731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.270757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.270892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 
00:30:09.303 [2024-11-17 19:39:07.271131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.271450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.271698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.271829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.271955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.272044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.272072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.303 qpair failed and we were unable to recover it. 00:30:09.303 [2024-11-17 19:39:07.272264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.303 [2024-11-17 19:39:07.272368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.272410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.272525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.272624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.272652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.272830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.272944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.272969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 
00:30:09.304 [2024-11-17 19:39:07.273083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.273374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.273609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.273871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.273979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.274098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.274228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.274257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.274410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.274528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.274556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.274663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.274747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.274772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 
00:30:09.304 [2024-11-17 19:39:07.274866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.275112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.275401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.275628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.275857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.275986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.276138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.276378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 
00:30:09.304 [2024-11-17 19:39:07.276660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.276821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.276932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.277258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.277509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.277815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.277927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.278072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.278291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 
00:30:09.304 [2024-11-17 19:39:07.278588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.278811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.278982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.279107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.279371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.279621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.279845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.279963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.280063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.280233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.280258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 
00:30:09.304 [2024-11-17 19:39:07.280383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.280530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.280558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.280720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.280877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.280902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.280993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.281244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.281515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.281817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.281933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.282043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.282178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.282210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 
00:30:09.304 [2024-11-17 19:39:07.282343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.282463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.282490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.282645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.282763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.282788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.282934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.283231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.283456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.304 [2024-11-17 19:39:07.283697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.304 [2024-11-17 19:39:07.283829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.304 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.283947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.305 [2024-11-17 19:39:07.284192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.284458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.284776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.284882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.284980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.285276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.285516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.285735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.285851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.305 [2024-11-17 19:39:07.285938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.286230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.286479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.286679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.286863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.286985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.287277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.287586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.305 [2024-11-17 19:39:07.287803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.287980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.288069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.288275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.288559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.288849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.288977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.289084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.289365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.305 [2024-11-17 19:39:07.289615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.289798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.289930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.290251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.290473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.290654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.290780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.290882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.291163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.305 [2024-11-17 19:39:07.291368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.291652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.291844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.292008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.292277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.292522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.292800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.292945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.293030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.305 [2024-11-17 19:39:07.293298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.293572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.293809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.293906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.293995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.294197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.294457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 00:30:09.305 [2024-11-17 19:39:07.294752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.305 [2024-11-17 19:39:07.294888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.305 qpair failed and we were unable to recover it. 
00:30:09.306 [2024-11-17 19:39:07.294971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.295175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.295461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.295755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.295871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.295999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.296205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.296495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 
00:30:09.306 [2024-11-17 19:39:07.296789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.296899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.296989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.297273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.297486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.297698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.297855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.297996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.298231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 
00:30:09.306 [2024-11-17 19:39:07.298445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.298752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.298857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.298996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.299185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.299398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.299595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.299761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.299882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 
00:30:09.306 [2024-11-17 19:39:07.300184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.300381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.300695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.300817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.300968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.301259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.301508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 00:30:09.306 [2024-11-17 19:39:07.301841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.306 [2024-11-17 19:39:07.301979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.306 qpair failed and we were unable to recover it. 
[... the identical retry sequence (connect() failed, errno = 111; sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats without interruption from 19:39:07.302081 through 19:39:07.341646 ...]
00:30:09.310 [2024-11-17 19:39:07.341784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.341888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.341922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.342081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.342223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.342261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.342411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.342514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.342552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.342699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.342830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.342857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.342969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.343203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.343451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 
00:30:09.310 [2024-11-17 19:39:07.343698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.343821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.343935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.344183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.344504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.344844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.344985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.345109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.345240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.345274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.345405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.345541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.345578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 
00:30:09.310 [2024-11-17 19:39:07.345682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.345804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.345832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.345942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.346126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.346397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.346713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.346878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.347022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.347170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.347208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.347344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.347491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.347533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 
00:30:09.310 [2024-11-17 19:39:07.347696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.347809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.347834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.347949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.348176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.348451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.348751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.348902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.349013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.349156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.349194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.349319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.349443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.349483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 
00:30:09.310 [2024-11-17 19:39:07.349613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.349738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.349774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.349959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.350259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.350483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.350702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.350878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.351001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.351099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.351133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.351286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.351428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.351466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 
00:30:09.310 [2024-11-17 19:39:07.351643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.351788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.351827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.351985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.352076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.352101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.352183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.352272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.352298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.310 [2024-11-17 19:39:07.352408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.352498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.310 [2024-11-17 19:39:07.352525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.310 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.352624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.352712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.352738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.352857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.352940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.352964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.353057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.353178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.353216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 
00:30:09.311 [2024-11-17 19:39:07.353339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.353466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.353499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.353628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.353749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.353789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.353920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.354206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.354438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.354680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.354830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.354944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 
00:30:09.311 [2024-11-17 19:39:07.355210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.355453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.355699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.355811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.355900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.356156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.356412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.356586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 
00:30:09.311 [2024-11-17 19:39:07.356844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.356958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.357106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.357377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.357608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.357878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.357979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.358088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.358278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 
00:30:09.311 [2024-11-17 19:39:07.358466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.358698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.358913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.358991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.359125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.359353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.359567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.359798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.359910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 
00:30:09.311 [2024-11-17 19:39:07.360009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.360206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.360510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.360798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.360923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.361042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.361307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.361544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 
00:30:09.311 [2024-11-17 19:39:07.361776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.361925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.362019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.362239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.362483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.362795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.362932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.311 qpair failed and we were unable to recover it. 00:30:09.311 [2024-11-17 19:39:07.363038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.311 [2024-11-17 19:39:07.363144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.363282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 
00:30:09.312 [2024-11-17 19:39:07.363503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.363720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.363888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.363982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.364263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.364452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.364638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.364818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.364925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 
00:30:09.312 [2024-11-17 19:39:07.365147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.365430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.365685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.365820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.365898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.366160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.366377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.366565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 
00:30:09.312 [2024-11-17 19:39:07.366810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.366946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.367061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.367274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.367497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.367748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.367856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.367946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.368161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 
00:30:09.312 [2024-11-17 19:39:07.368395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.368610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.368757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.368900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.369141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.369319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.369547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.369802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.369940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 
00:30:09.312 [2024-11-17 19:39:07.370047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.370251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.370510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.370713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.370827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.370964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.371190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.371413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 
00:30:09.312 [2024-11-17 19:39:07.371606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.371861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.371999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.372102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.372264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.372303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.372441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.372578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.312 [2024-11-17 19:39:07.372616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.312 qpair failed and we were unable to recover it. 00:30:09.312 [2024-11-17 19:39:07.372789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.372886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.372921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.373057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.373323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 
00:30:09.313 [2024-11-17 19:39:07.373594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.373817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.373954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.374078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.374320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.374556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.374849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.374991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.375158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.375280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.375330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 
00:30:09.313 [2024-11-17 19:39:07.375478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.375614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.375653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.375840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.375954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.375982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.376090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.376197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.376222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.376342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.376420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.376448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.376569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.376713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.376764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.376897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.377192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 
00:30:09.313 [2024-11-17 19:39:07.377486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.377803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.377962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.378064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.378287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.378522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.378773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.378956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.379075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.379191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.379230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 
00:30:09.313 [2024-11-17 19:39:07.379372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.379527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.379561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.379694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.379838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.379879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.380036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.380313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.380526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.380754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.380881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.381010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.381142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.381175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 
00:30:09.313 [2024-11-17 19:39:07.381319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.381436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.381474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.381618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.381807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.381849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.381989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.382243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.382531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.382845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.382977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.383172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.383306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.383342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 
00:30:09.313 [2024-11-17 19:39:07.383503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.383630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.383664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.383815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.383951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.383977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.384090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.384182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.384210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.313 qpair failed and we were unable to recover it. 00:30:09.313 [2024-11-17 19:39:07.384322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.313 [2024-11-17 19:39:07.384414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.384442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.384537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.384665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.384711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.384844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.384986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.385168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-11-17 19:39:07.385472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.385853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.385988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.386118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.386205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.386238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.386394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.386476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.386502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.386590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.386692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.386743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.386935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.387238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-11-17 19:39:07.387563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.387871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.387998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.388140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.388382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.388618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.388771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.388926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.389221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-11-17 19:39:07.389524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.389845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.389977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.390073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.390157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.390182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.390316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.390432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.390459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.390563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.390691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.390727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.390895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.391235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-11-17 19:39:07.391612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.391867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.391992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.392109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.392335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.392559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.392730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.392872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.393216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-11-17 19:39:07.393528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.393810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.393984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.394081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.394260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.394555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.394822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.394968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.395004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.395190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.395331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.395369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 
00:30:09.314 [2024-11-17 19:39:07.395509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.395650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.395698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.395857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.395982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.396154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.396404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.396688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.314 [2024-11-17 19:39:07.396823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.314 qpair failed and we were unable to recover it. 00:30:09.314 [2024-11-17 19:39:07.396924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.397022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.397074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.397231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.397341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.397379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-11-17 19:39:07.397535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.397693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.397729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.397902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.398190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.398413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.398610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.398770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.398908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.399194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-11-17 19:39:07.399515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.399852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.399992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.400156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.400422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.400655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.400830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.400964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.401096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.401130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.401307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.401440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.401480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-11-17 19:39:07.401637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.401816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.401855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.401996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.402264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.402509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.402778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.402923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.403074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.403188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.403226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.403377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.403488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.403527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-11-17 19:39:07.403696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.403827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.403861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.404033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.404350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.404598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.404860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.404976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.405011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.405138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.405307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.405346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 00:30:09.315 [2024-11-17 19:39:07.405476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.405642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.315 [2024-11-17 19:39:07.405683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.315 qpair failed and we were unable to recover it. 
00:30:09.315 [2024-11-17 19:39:07.405851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.315 [2024-11-17 19:39:07.405992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.315 [2024-11-17 19:39:07.406044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:09.315 qpair failed and we were unable to recover it.
[... the same four-line failure pattern — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0xbb2390 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously between the entries shown above and the final attempt shown below ...]
00:30:09.319 [2024-11-17 19:39:07.441418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.319 [2024-11-17 19:39:07.441495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.319 [2024-11-17 19:39:07.441522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:09.319 qpair failed and we were unable to recover it.
00:30:09.319 [2024-11-17 19:39:07.441608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.441719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.441748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.441875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.441968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.441995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.442086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.442358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.442578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.442843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.442993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.443129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 
00:30:09.319 [2024-11-17 19:39:07.443375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.443624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.443889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.443999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.444085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.444300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.444509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.444746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.444908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 
00:30:09.319 [2024-11-17 19:39:07.444991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.445222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.445441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.445669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.445814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.445899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.446042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.446070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.319 qpair failed and we were unable to recover it. 00:30:09.319 [2024-11-17 19:39:07.446199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.319 [2024-11-17 19:39:07.446311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.446337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.446451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.446529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.446560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 
00:30:09.320 [2024-11-17 19:39:07.446703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.446833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.446858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.446935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.447165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.447361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.447612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.447892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.447986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.448122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 
00:30:09.320 [2024-11-17 19:39:07.448411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.448602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.448734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.448871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.449163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.449450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.449636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.449794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.449880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 
00:30:09.320 [2024-11-17 19:39:07.450175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.450373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.450600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.450866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.450981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.451163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.451361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.451651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 
00:30:09.320 [2024-11-17 19:39:07.451860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.451966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.452082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.452309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.452538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.452740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.452840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.452921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.453115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 
00:30:09.320 [2024-11-17 19:39:07.453307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.453559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.453814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.453981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.454096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.454313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.454506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.454771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.454916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 
00:30:09.320 [2024-11-17 19:39:07.455018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.455250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.455496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.455808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.455949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.456050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.456171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.456200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.320 qpair failed and we were unable to recover it. 00:30:09.320 [2024-11-17 19:39:07.456316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.320 [2024-11-17 19:39:07.456412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.456440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.456543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.456627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.456652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-11-17 19:39:07.456742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.456825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.456850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.456940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.457192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.457408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.457596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.457863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.457964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.458050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-11-17 19:39:07.458300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.458546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.458810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.458936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.459020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.459239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.459508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.459762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.459869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-11-17 19:39:07.459983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.460201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.460409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.460644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.460802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.460925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.461140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.461337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-11-17 19:39:07.461600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.461843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.461941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.462028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.462227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.462533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.462779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.462921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.463046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-11-17 19:39:07.463298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.463535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.463762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.463900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.463988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.464207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.464423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.464766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.464891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-11-17 19:39:07.465034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.465224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.465411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.465678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.465875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.465990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.466117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.466200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.466225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 00:30:09.321 [2024-11-17 19:39:07.466311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.466421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.321 [2024-11-17 19:39:07.466447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-11-17 19:39:07.466570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.466695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.466724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.466817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.466938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.466965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.467084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.467275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.467527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.467799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.467934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.468026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-11-17 19:39:07.468269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.468559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.468817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.468963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.469094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.469252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.469280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.469383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.469466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.469491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.469594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.469719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.469747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.469877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-11-17 19:39:07.470160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.470444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.470687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.470909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.470991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.471152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.471376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.471648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-11-17 19:39:07.471855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.471968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.472102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.472419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.472608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.472863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.472979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.473133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.473339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-11-17 19:39:07.473541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.473748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.473885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.473968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.474194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.474411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.474663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.474785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.474873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-11-17 19:39:07.475138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.475335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.475568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.475834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.475949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.322 qpair failed and we were unable to recover it. 00:30:09.322 [2024-11-17 19:39:07.476097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.476218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.322 [2024-11-17 19:39:07.476246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.476343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.476460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.476488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.476622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.476715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.476742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-11-17 19:39:07.476875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.477120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.477341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.477590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.477869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.477986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.478092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.478309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-11-17 19:39:07.478599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.478817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.478919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.479055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.479341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.479618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.479821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.479954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.480040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-11-17 19:39:07.480333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.480531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.480763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.480888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.481002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.481272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.481477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.481733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.481838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-11-17 19:39:07.481943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.482230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.482452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.482691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.482818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.482923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.483182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.483461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-11-17 19:39:07.483684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.483803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.483967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.484184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.484441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.484701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.484817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.484899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.485113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-11-17 19:39:07.485374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.485620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.485877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.485966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.486109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.486361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.486560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.323 [2024-11-17 19:39:07.486662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.323 qpair failed and we were unable to recover it. 00:30:09.323 [2024-11-17 19:39:07.486774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.486880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.486906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.487038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.487298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.487562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.487801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.487966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.488097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.488394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.488643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.488861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.488995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.489092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.489281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.489490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.489767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.489914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.490007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.490254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.490526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.490833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.490954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.491080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.491286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.491529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.491761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.491867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.491950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.492171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.492454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.492707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.492825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.492917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.493139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.493410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.493718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.493862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.494034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.494261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.494440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.494669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.494815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.494915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.495229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.495458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.495709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.495816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.495980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.496247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.496554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.496808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.496927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.497048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.497189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.497216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 00:30:09.324 [2024-11-17 19:39:07.497310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.497415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.497439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.324 qpair failed and we were unable to recover it. 
00:30:09.324 [2024-11-17 19:39:07.497521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.497621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.324 [2024-11-17 19:39:07.497648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.497761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.497879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.497904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.498030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.498245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.498510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.498804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.498909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.499019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 
00:30:09.325 [2024-11-17 19:39:07.499236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.499484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.499749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.499868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.499991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.500215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.500498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.500720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.500836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 
00:30:09.325 [2024-11-17 19:39:07.501021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.501240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.501518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.501782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.501893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.501967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.502269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.502483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 
00:30:09.325 [2024-11-17 19:39:07.502713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.502827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.502907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.503203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.503431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.503707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.503860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.504010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.504235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 
00:30:09.325 [2024-11-17 19:39:07.504504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.504771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.504883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.504988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.505199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.505438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.505659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.505806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.505899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 
00:30:09.325 [2024-11-17 19:39:07.506175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.506374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.506606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.506880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.506988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.325 [2024-11-17 19:39:07.507099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.507218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.325 [2024-11-17 19:39:07.507246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.325 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.507337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.507435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.507460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.507545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.507626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.507651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 
00:30:09.326 [2024-11-17 19:39:07.507747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.507896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.507922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.508011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.508187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.508430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.508735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.508881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.508989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.509276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 
00:30:09.326 [2024-11-17 19:39:07.509557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.509779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.509917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.510059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.510264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.510537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.510752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.510863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.510951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 
00:30:09.326 [2024-11-17 19:39:07.511203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.511468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.511690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.511788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.511902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.512108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.512326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.512563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 
00:30:09.326 [2024-11-17 19:39:07.512798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.512911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.513008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.513280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.513486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.513772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.513872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.513991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.514235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 
00:30:09.326 [2024-11-17 19:39:07.514476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.514731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.514869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.514958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.515166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.515390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.515657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.515795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.515888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 
00:30:09.326 [2024-11-17 19:39:07.516145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.516352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.516653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.516838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.516951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.517148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.517420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.326 qpair failed and we were unable to recover it. 00:30:09.326 [2024-11-17 19:39:07.517640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.326 [2024-11-17 19:39:07.517767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.517793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 
00:30:09.327 [2024-11-17 19:39:07.517929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.518174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.518492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.518731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.518883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.519007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.519225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.519525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 
00:30:09.327 [2024-11-17 19:39:07.519745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.519852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.519965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.520199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.520445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.520739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.520847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.520964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.521166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 
00:30:09.327 [2024-11-17 19:39:07.521425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.521660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.521894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.521993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.522161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.522450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.522707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.522870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.522960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 
00:30:09.327 [2024-11-17 19:39:07.523206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.523429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.523701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.523807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.523944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.524249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.524445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.524668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.524821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 
00:30:09.327 [2024-11-17 19:39:07.524925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.525057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.525085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.327 [2024-11-17 19:39:07.525194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.525273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.327 [2024-11-17 19:39:07.525300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.327 qpair failed and we were unable to recover it. 00:30:09.610 [2024-11-17 19:39:07.525389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.525520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.525550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.610 qpair failed and we were unable to recover it. 00:30:09.610 [2024-11-17 19:39:07.525681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.525831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.525861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.610 qpair failed and we were unable to recover it. 00:30:09.610 [2024-11-17 19:39:07.525978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.526093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.526148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.610 qpair failed and we were unable to recover it. 00:30:09.610 [2024-11-17 19:39:07.526305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.526453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.526493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.610 qpair failed and we were unable to recover it. 00:30:09.610 [2024-11-17 19:39:07.526619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.526800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.610 [2024-11-17 19:39:07.526836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.610 qpair failed and we were unable to recover it. 
00:30:09.610 [2024-11-17 19:39:07.526992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.610 [2024-11-17 19:39:07.527119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.610 [2024-11-17 19:39:07.527148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420
00:30:09.610 qpair failed and we were unable to recover it.
00:30:09.610 [repeated connect() failed (errno = 111) / "qpair failed and we were unable to recover it." entries for tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 continue through 19:39:07.541503 and are omitted here]
00:30:09.612 [2024-11-17 19:39:07.541584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.612 [2024-11-17 19:39:07.541706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.612 [2024-11-17 19:39:07.541733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:09.612 qpair failed and we were unable to recover it.
00:30:09.614 [repeated connect() failed (errno = 111) / "qpair failed and we were unable to recover it." entries for tqpair=0xbb2390 with addr=10.0.0.2, port=4420 continue through 19:39:07.563288 and are omitted here]
00:30:09.614 [2024-11-17 19:39:07.563394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.563510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.563553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.563641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.563761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.563786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.563920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.564169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.564405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.564602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.564883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.564988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.565129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.565359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.565648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.565786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.565901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.566163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.566445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.566725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.566835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.566941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.567251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.567538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.567864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.567978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.568161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.568484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.568708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.568853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.568941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.569213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.569476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.569701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.569847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.569945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.570128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.570378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.570666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.570817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.570927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.571202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.571541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.571811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.571924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.572051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.572313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.572584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.572807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.572961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.573049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.573234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.573516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.573751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.573851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.573938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.574216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.574470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.574713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.574822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.574927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.575217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.575405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.575687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.575843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-11-17 19:39:07.575940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.576078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.576103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.576265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.576367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.576395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.576505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.576615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.614 [2024-11-17 19:39:07.576642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-11-17 19:39:07.576826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.576927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.576952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.577068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.577335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.577582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-11-17 19:39:07.577824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.577957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.578045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.578192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.578220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.578359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.578497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.578522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.578686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.578782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.578811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.578902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.579204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.579445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-11-17 19:39:07.579713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.579856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.579961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.580218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.580482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.580782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.580942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.581051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.581325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-11-17 19:39:07.581580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.581835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.581963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.582081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.582359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.582613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.582831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.582928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.583029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-11-17 19:39:07.583281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.583519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.583771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.583929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.584094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.584308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.584542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.584841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.584982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-11-17 19:39:07.585124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.585372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.585656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.585815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.585920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.586193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.586493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.586714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.586865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-11-17 19:39:07.586942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.587225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.587536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.587763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.587903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.588064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.588178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.588206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.588303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.588412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.588437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-11-17 19:39:07.588589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.588698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.615 [2024-11-17 19:39:07.588723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.588867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.588988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.589116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.589362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.589643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.589795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.589904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.590115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.590331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.590609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.590853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.590965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.591110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.591361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.591609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.591882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.591974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.592135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.592364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.592652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.592818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.592968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.593218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.593467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.593798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.593980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.594097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.594189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.594214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.594326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.594456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.594484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.594639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.594768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.594798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.594932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.595214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.595515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.595821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.595928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.596017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.596311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.596523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.596780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.596893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.596969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.597282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.597528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.597815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.597959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.598040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.598244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.598506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.598793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.598932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.599058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.599189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.599215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.599330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.599449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.599482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.599616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.599761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.599787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 
00:30:09.616 [2024-11-17 19:39:07.599869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.600002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.600030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.616 qpair failed and we were unable to recover it. 00:30:09.616 [2024-11-17 19:39:07.600141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.600254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.616 [2024-11-17 19:39:07.600282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.600407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.600494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.600519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.600603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.600715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.600744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.600841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.600991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.601126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.601414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.601734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.601889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.602023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.602263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.602487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.602815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.602952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.603072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.603309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.603600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.603894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.603974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.604128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.604405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.604661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.604798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.604934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.605197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.605457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.605712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.605865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.606030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.606306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.606572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.606838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.606942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.607035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.607166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.607194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.607292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.607381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.607410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.607553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.607697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.607723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.607878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.608121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.608395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.608608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.608831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.608927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.609039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.609266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.609573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.609854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.609997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.610163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.610248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.610279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.610420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.610548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.610574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.610704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.610813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.610838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.610947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.611191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.611453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.611694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.611830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.611944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.612213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.612415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 
00:30:09.617 [2024-11-17 19:39:07.612624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.612875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.612986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.613101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.613346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.613612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.617 qpair failed and we were unable to recover it. 00:30:09.617 [2024-11-17 19:39:07.613838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.617 [2024-11-17 19:39:07.613999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.614166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 
00:30:09.618 [2024-11-17 19:39:07.614458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.614654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.614905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.614998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.615158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.615415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.615707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.615825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.615957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 
00:30:09.618 [2024-11-17 19:39:07.616213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.616451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.616706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.616838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.616945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.617165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.617448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.617700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.617852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 
00:30:09.618 [2024-11-17 19:39:07.617973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.618266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.618526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.618726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.618881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.619031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.619252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.619512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 
00:30:09.618 [2024-11-17 19:39:07.619754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.619865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.619942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.620257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.620550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.620813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.620941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.621031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.621311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 
00:30:09.618 [2024-11-17 19:39:07.621540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.621833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.621963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.622068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.622215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.622240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.622369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.622496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.622523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.622671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.622769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.622797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.622932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.623020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.623047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 00:30:09.618 [2024-11-17 19:39:07.623178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.623334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.618 [2024-11-17 19:39:07.623364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.618 qpair failed and we were unable to recover it. 
00:30:09.618 - 00:30:09.621 [2024-11-17 19:39:07.623456 - 19:39:07.662444] (the same three-record failure sequence as above repeats continuously for the remaining retry attempts: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" records, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420", each followed by "qpair failed and we were unable to recover it."; no attempt in this span succeeds)
00:30:09.621 [2024-11-17 19:39:07.662536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.662664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.662698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.662826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.662913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.662938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.663038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.663274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.663526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.663850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.663969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.664089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 
00:30:09.621 [2024-11-17 19:39:07.664359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.664609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.664838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.664960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.665032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.665282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.665555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.665861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.665975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 
00:30:09.621 [2024-11-17 19:39:07.666135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.666363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.666659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.666874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.666990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.667018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.621 qpair failed and we were unable to recover it. 00:30:09.621 [2024-11-17 19:39:07.667136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.621 [2024-11-17 19:39:07.667286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.667313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.667423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.667529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.667554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.667640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.667748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.667777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.667899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.667989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.668116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.668303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.668552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.668744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.668879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.669220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.669497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.669757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.669873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.670033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.670297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.670577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.670846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.670960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.671038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.671284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.671464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.671729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.671876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.671977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.672158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.672386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.672701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.672813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.672925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.673170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.673354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.673597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.673863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.673990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.674102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.674227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.674256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.674406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.674525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.674551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.674712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.674819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.674845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.674927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.675146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.675459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.675702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.675820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.675949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.676280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.676560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.676846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.676986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.677112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.677197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.677239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.677405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.677545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.677570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.677763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.677902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.677933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.678076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.678199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.678224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.678338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.678428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.678456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.678575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.678738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.678764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.678906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.679218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.679504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.679768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.679909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.679999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.680133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.680159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.680294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.680475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.680503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.680649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.680799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.680824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 
00:30:09.622 [2024-11-17 19:39:07.680940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.681025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.681049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.681163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.681267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.681292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.622 [2024-11-17 19:39:07.681380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.681532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.622 [2024-11-17 19:39:07.681559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.622 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.681691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.681803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.681828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.681914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.682142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.682352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 
00:30:09.623 [2024-11-17 19:39:07.682595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.682828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.682969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.683082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.683358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.683568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.683817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.683920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.684044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 
00:30:09.623 [2024-11-17 19:39:07.684287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.684587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.684843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.684991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.685140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.685346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.685625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.685812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.685935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 
00:30:09.623 [2024-11-17 19:39:07.686199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.686420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.686658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.686846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.686959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.687242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.687551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 00:30:09.623 [2024-11-17 19:39:07.687816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.623 [2024-11-17 19:39:07.687924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.623 qpair failed and we were unable to recover it. 
00:30:09.623 [2024-11-17 19:39:07.688011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.623 [2024-11-17 19:39:07.688125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.623 [2024-11-17 19:39:07.688150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:09.623 qpair failed and we were unable to recover it.
[... the same cycle of posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." repeats continuously from 2024-11-17 19:39:07.688 through 19:39:07.728 ...]
00:30:09.626 [2024-11-17 19:39:07.728094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.626 [2024-11-17 19:39:07.728180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.626 [2024-11-17 19:39:07.728207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:09.626 qpair failed and we were unable to recover it.
00:30:09.626 [2024-11-17 19:39:07.728299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.728430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.728454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.728565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.728671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.728700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.728806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.728891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.728919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.729042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.729291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.729547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.729847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.729964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 
00:30:09.626 [2024-11-17 19:39:07.730105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.730294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.730544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.730779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.730895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.731034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.731298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.731533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 
00:30:09.626 [2024-11-17 19:39:07.731801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.731935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.732050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.732320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.732589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.732871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.732995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.733159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.733361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 
00:30:09.626 [2024-11-17 19:39:07.733663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.733824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.733954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.734216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.734488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.734759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.734922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.626 [2024-11-17 19:39:07.735057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.735214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.626 [2024-11-17 19:39:07.735243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.626 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.735345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.735465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.735493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.735592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.735712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.735738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.735827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.735948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.735976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.736101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.736267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.736295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.736420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.736531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.736554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.736689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.736787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.736817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.736981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.737275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.737494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.737742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.737873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.737983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.738206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.738424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.738711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.738839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.738967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.739241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.739480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.739733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.739853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.739958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.740170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.740418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.740643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.740856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.740995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.741170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.741404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.741660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.741771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.741882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.742132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.742406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.742671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.742838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.742949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.743179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.743438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.743749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.743894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.744028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.744260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.744498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.744785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.744900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.745035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.745246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.745518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.745746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.745877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.745963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.746207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.746403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.746649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.746897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.746980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.747108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.747321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.747620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 
00:30:09.627 [2024-11-17 19:39:07.747865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.747970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.748054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.748160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.748185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.748303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.748395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.748420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.748526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.748603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.627 [2024-11-17 19:39:07.748628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.627 qpair failed and we were unable to recover it. 00:30:09.627 [2024-11-17 19:39:07.748757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.748856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.748884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.748986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.749187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.749397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.749629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.749781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.749943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.750230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.750539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.750744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.750903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.751006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.751271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.751464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.751789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.751926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.752012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.752240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.752514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.752818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.752929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.753048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.753172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.753200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.753322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.753442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.753470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.753623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.753736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.753763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.753932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.754170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.754395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.754649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.754804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.754931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.755242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.755515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.755769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.755951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.756113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.756224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.756250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.756391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.756487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.756515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.756652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.756782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.756824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.756913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.757140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.757363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.757617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.757842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.757955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.758202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.758460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.758717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.758869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.758985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.759269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.759504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.759786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.759934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.760080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.760331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 
00:30:09.628 [2024-11-17 19:39:07.760543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.760774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.760882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.760965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.761099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.761127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.761216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.761308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.761350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.628 qpair failed and we were unable to recover it. 00:30:09.628 [2024-11-17 19:39:07.761437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.761574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.628 [2024-11-17 19:39:07.761599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.761773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.761885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.761911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.762039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.762164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.762191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.762301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.762438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.762462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.762611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.762706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.762741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.762911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.763150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.763420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.763695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.763883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.764021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.764285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.764506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.764770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.764878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.765002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.765271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.765527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.765786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.765920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.766062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.766188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.766215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.766319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.766433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.766458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.766572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.766760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.766786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.766899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.767174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.767414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.767667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.767857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.767962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.768194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.768467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.768793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.768898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.768992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.769246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.769482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.769776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.769926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.770049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.770316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.770559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.770807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.770932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.771048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.771307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.771546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.771783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.771909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.771996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.772242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.772537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.772829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.772978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.773091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.773346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.773614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.773857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.773981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.774145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.774410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.774687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.774842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.774929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 
00:30:09.629 [2024-11-17 19:39:07.775116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.775385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.629 qpair failed and we were unable to recover it. 00:30:09.629 [2024-11-17 19:39:07.775593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.629 [2024-11-17 19:39:07.775776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.775893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.776165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.776397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.776608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.776765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.776887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.777200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.777493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.777747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.777878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.777991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.778253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.778581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.778833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.778969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.779126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.779254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.779282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.779403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.779504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.779530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.779641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.779734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.779759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.779897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.780191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.780473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.780693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.780831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.780953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.781177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.781379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.781665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.781860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.781999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.782225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.782423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.782641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.782862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.782992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.783108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.783381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.783586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.783858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.783982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.784110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.784333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.784606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.784835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.784970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.785093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.785211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.785238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.785388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.785507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.785535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.785685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.785825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.785850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.785963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.786268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.786576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.786816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.786969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.787134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.787457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.787719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.787855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 
00:30:09.630 [2024-11-17 19:39:07.788007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.788253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.788478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.630 qpair failed and we were unable to recover it. 00:30:09.630 [2024-11-17 19:39:07.788747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.630 [2024-11-17 19:39:07.788898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.789008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.789259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.789534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.789825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.789984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.790127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.790420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.790660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.790895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.790990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.791095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.791352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.791565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.791835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.791944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.792075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.792336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.792589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.792809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.792946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.793060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.793320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.793562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.793806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.793912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.794049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.794278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f034c000b90 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.794514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.794846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.794992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.795161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.795434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.795751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.795915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.796036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.796339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.796604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.796829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.796987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.797112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.797343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.797624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.797881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.797988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.798074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.798247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.798290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.798386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.798536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.798566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.798703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.798811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.798837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.798925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.799246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.799503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.799835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.799939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.800083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.800326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.800602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.800852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.800964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.801073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.801187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.801213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.801321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.801430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.801471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.801637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.801781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.801808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.801959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.802203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.802473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 
00:30:09.631 [2024-11-17 19:39:07.802775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.802881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.631 qpair failed and we were unable to recover it. 00:30:09.631 [2024-11-17 19:39:07.803001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.803139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.631 [2024-11-17 19:39:07.803167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.803281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.803432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.803460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.803619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.803747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.803773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.803857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.803979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.804105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.804400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.804722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.804862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.804976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.805179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.805383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.805628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.805796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.805914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.806187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.806462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.806745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.806882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.806959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.807195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.807416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.807672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.807885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.807978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.808142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.808421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.808644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.808845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.808970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.809199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.809434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.809719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.809865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.809970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.810242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.810576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.810864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.810972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.811069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.811355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.811633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.811834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.811941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.812051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.812313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.812569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.812769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.812878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.813005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.813204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.813465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.813748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.813857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.813943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.814257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.814475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.814761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.814875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.814954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.815196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.815489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.815728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.815846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.815920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.816206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.816430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 00:30:09.632 [2024-11-17 19:39:07.816698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.632 [2024-11-17 19:39:07.816826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.632 qpair failed and we were unable to recover it. 
00:30:09.632 [2024-11-17 19:39:07.816911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.817129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.817316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.817556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.817805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.817938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.818025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.818283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.818512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.818775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.818889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.818983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.819246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.819490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.819718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.819830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.819934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.820128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.820420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.820665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.820876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.820973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.821120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.821398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.821599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.821813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.821927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.822003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.822251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.822462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.822700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.822896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.822978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.823123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.823423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.823623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.823831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.823978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.824101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.824360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.824564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.824808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.824944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.825052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.825318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.825565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.825856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.825968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.826054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.826294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.826516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.826763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.826885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.826990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.827181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.827469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.827755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.827858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.827968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.828190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 
00:30:09.633 [2024-11-17 19:39:07.828404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.633 qpair failed and we were unable to recover it. 00:30:09.633 [2024-11-17 19:39:07.828634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.633 [2024-11-17 19:39:07.828780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.828808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.828962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.829231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.829421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.829669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.829927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.829999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.830108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.830405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.830651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.830789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.830928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.831158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.831434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.831670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.831876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.831975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.832129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.832404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.832656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.832823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.832927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.833162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.833436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.833651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.833895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.833987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.834140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.834420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.834701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.834819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.834894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.835188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.835395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.835610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.835904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.835991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.836099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.836369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.836615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.836855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.836964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.837045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.837164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.837192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.837294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.837406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.837431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.837589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.837706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.837735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.837894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.838153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.838417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.838713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.838891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.838998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.839197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.839480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.839750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.839857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.839944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.840188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.840400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 
00:30:09.634 [2024-11-17 19:39:07.840618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.840794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.840891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.841161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.841340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.841598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.841884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.841995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.634 [2024-11-17 19:39:07.842020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.634 qpair failed and we were unable to recover it. 00:30:09.634 [2024-11-17 19:39:07.842117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 
00:30:09.635 [2024-11-17 19:39:07.842336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.842601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.842845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.842967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.843054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.843320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.843599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.843849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.843999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 
00:30:09.635 [2024-11-17 19:39:07.844153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.844387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.844637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.844763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.844937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.845139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.845381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.845624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 
00:30:09.635 [2024-11-17 19:39:07.845871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.845976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.846066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.846283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.846528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.846823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.846972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.847110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.847337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 
00:30:09.635 [2024-11-17 19:39:07.847563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.847859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.847968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.848060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.848260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.848496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.848684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.848796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 00:30:09.635 [2024-11-17 19:39:07.848886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.849020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.635 [2024-11-17 19:39:07.849048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.635 qpair failed and we were unable to recover it. 
00:30:09.635 [2024-11-17 19:39:07.849219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:09.635 [2024-11-17 19:39:07.849334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:09.635 [2024-11-17 19:39:07.849359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 
00:30:09.635 qpair failed and we were unable to recover it. 
[... the same sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0xbb2390 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 19:39:07.849 and 19:39:07.892; duplicate entries omitted ...]
00:30:09.922 [2024-11-17 19:39:07.892752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:09.922 [2024-11-17 19:39:07.892858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:09.922 [2024-11-17 19:39:07.892884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 
00:30:09.922 qpair failed and we were unable to recover it. 
00:30:09.922 [2024-11-17 19:39:07.893022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.893152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.893181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.893313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.893450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.893489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.893653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.893817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.893853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.894028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.894158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.894193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.894344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.894485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.894516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.894639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.894759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.894786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.894925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 
00:30:09.922 [2024-11-17 19:39:07.895169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.895455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.895763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.895961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.896098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.896232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.896266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.896411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.896566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.896602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.896735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.896861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.896887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.896972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.897141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.897169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 
00:30:09.922 [2024-11-17 19:39:07.897305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.897414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.897448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.897582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.897749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.897789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.897948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.898103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.898144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.898338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.898469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.898519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.898663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.898873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.898904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.899030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.899348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 
00:30:09.922 [2024-11-17 19:39:07.899551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.899809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.899923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.900034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.900138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.900173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.900328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.900469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.900508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.900661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.900836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.900875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.922 qpair failed and we were unable to recover it. 00:30:09.922 [2024-11-17 19:39:07.901037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.901169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.922 [2024-11-17 19:39:07.901202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.901321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.901479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.901505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 
00:30:09.923 [2024-11-17 19:39:07.901582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.901661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.901694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.901792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.901899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.901925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.902064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.902180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.902220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.902333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.902497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.902537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.902667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.902849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.902902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.903056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.903204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.903243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.903399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.903525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.903553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 
00:30:09.923 [2024-11-17 19:39:07.903666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.903790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.903816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.903931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.904183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.904533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.904838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.904989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.905168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.905310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.905352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.905486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.905569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.905595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 
00:30:09.923 [2024-11-17 19:39:07.905707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.905816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.905858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.905980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.906075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.906123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.906210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.906344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.906380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.906499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.906665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.906712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.906861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.907231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.907562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 
00:30:09.923 [2024-11-17 19:39:07.907785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.907930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.908038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.908228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.908538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.908873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.908977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.909014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.909200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.909349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.909387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.923 qpair failed and we were unable to recover it. 00:30:09.923 [2024-11-17 19:39:07.909539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.923 [2024-11-17 19:39:07.909639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.909668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 
00:30:09.924 [2024-11-17 19:39:07.909825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.909914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.909940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.910069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.910378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.910589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.910893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.910976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.911110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.911395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 
00:30:09.924 [2024-11-17 19:39:07.911655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.911835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.912016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.912323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.912632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.912870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.912989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.913100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.913292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 
00:30:09.924 [2024-11-17 19:39:07.913609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.913832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.913990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.914172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.914294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.914332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.914475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.914627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.914666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.914838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.914967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.915182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.915497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 
00:30:09.924 [2024-11-17 19:39:07.915794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.915948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.916111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.916251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.916289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.916441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.916585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.916624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.916800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.916936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.916987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.917141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.917256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.917284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.918139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.918280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.918309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 00:30:09.924 [2024-11-17 19:39:07.918439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.918560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.918599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.924 qpair failed and we were unable to recover it. 
00:30:09.924 [2024-11-17 19:39:07.918737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.924 [2024-11-17 19:39:07.918896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.918931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.919067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.919235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.919289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.919466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.919600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.919654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.919822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.919944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.919970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.920082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.920224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.920250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.920344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.920450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.920486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.920611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.920776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.920812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 
00:30:09.925 [2024-11-17 19:39:07.920944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.921055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.921090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.921247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.921419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.921458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.921576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.921709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.921745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.921795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbfe00 (9): Bad file descriptor 00:30:09.925 [2024-11-17 19:39:07.921999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.922274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.922492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.922717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.922864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 
00:30:09.925 [2024-11-17 19:39:07.923004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.923258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.923460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.923687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.923831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.923942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.924087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.924114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.924202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.924297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.924324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 00:30:09.925 [2024-11-17 19:39:07.924414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.924525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.925 [2024-11-17 19:39:07.924549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.925 qpair failed and we were unable to recover it. 
00:30:09.930 [2024-11-17 19:39:07.957311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.957450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.957478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.930 qpair failed and we were unable to recover it. 00:30:09.930 [2024-11-17 19:39:07.957575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.957689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.957715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.930 qpair failed and we were unable to recover it. 00:30:09.930 [2024-11-17 19:39:07.957805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.957889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.957915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.930 qpair failed and we were unable to recover it. 00:30:09.930 [2024-11-17 19:39:07.958044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.958182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.930 [2024-11-17 19:39:07.958219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.958342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.958467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.958507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.958604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.958700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.958738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.958888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.958997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 
00:30:09.931 [2024-11-17 19:39:07.959152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.959449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.959747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.959885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.959972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.960169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.960440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.960723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.960852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 
00:30:09.931 [2024-11-17 19:39:07.960970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.961206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.961418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.961619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.961745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.961859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.962205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.962427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 
00:30:09.931 [2024-11-17 19:39:07.962704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.962861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.962955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.963154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.963445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.963662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.963807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.963899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.964188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 
00:30:09.931 [2024-11-17 19:39:07.964446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.964687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.964899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.964995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.965023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.965186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.965296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.931 [2024-11-17 19:39:07.965329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.931 qpair failed and we were unable to recover it. 00:30:09.931 [2024-11-17 19:39:07.965443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.965528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.965555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.965662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.965825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.965851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.965947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 
00:30:09.932 [2024-11-17 19:39:07.966175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.966396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.966622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.966861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.966988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.967164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.967466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.967718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.967832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 
00:30:09.932 [2024-11-17 19:39:07.967917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.968199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.968456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.968692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.968871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.968976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.969273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.969582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 
00:30:09.932 [2024-11-17 19:39:07.969833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.969941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.970059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.970247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.970546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.970831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.970955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.971096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.971343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 
00:30:09.932 [2024-11-17 19:39:07.971591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.971882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.971985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.972141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.972366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.932 [2024-11-17 19:39:07.972585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.932 [2024-11-17 19:39:07.972731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.932 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.972817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.972914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.972940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.973020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 
00:30:09.933 [2024-11-17 19:39:07.973246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.973499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.973702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.973814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.973933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.974253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.974488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.974760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.974904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 
00:30:09.933 [2024-11-17 19:39:07.974999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.975247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.975572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.975839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.975983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.976072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.976336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.976627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 
00:30:09.933 [2024-11-17 19:39:07.976840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.976948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.977035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.977349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.977591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.977840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.977976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.978067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.978311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 
00:30:09.933 [2024-11-17 19:39:07.978563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.978776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.978892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.978980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.979177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.979398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.933 qpair failed and we were unable to recover it. 00:30:09.933 [2024-11-17 19:39:07.979615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.933 [2024-11-17 19:39:07.979705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.979741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.979832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.979911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.979936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 
00:30:09.934 [2024-11-17 19:39:07.980050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.980238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.980432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.980696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.980833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.980915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.981171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.981417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 
00:30:09.934 [2024-11-17 19:39:07.981645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.981797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.981906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.982176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.982362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.982598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.982879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.982998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.983092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 
00:30:09.934 [2024-11-17 19:39:07.983315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.983522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.983776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.983915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.984003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.984208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.984424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.984651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 
00:30:09.934 [2024-11-17 19:39:07.984914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.984994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.985139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.985346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.985583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.985836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.985976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.986067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.986193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.934 [2024-11-17 19:39:07.986218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.934 qpair failed and we were unable to recover it. 00:30:09.934 [2024-11-17 19:39:07.986312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.986420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.986446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 
00:30:09.935 [2024-11-17 19:39:07.986529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.986608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.986634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.986777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.986866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.986891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.986989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.987181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.987402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.987619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.987861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.987975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 
00:30:09.935 [2024-11-17 19:39:07.988085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.988324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.988582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.988830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.988941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.989021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.989244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.989524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 
00:30:09.935 [2024-11-17 19:39:07.989813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.989945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.990092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.990221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.990249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.990370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.990490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.990519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.990639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.990742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.990770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.990892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.991076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.991102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.935 qpair failed and we were unable to recover it. 00:30:09.935 [2024-11-17 19:39:07.991223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.991321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.935 [2024-11-17 19:39:07.991351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.991523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.991633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.991659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 
00:30:09.936 [2024-11-17 19:39:07.991840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.991932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.991959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.992054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.992369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.992607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.992867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.992969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.993111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.993334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 
00:30:09.936 [2024-11-17 19:39:07.993583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.993851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.993990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.994080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.994329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.994564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.994798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.994928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.995068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 
00:30:09.936 [2024-11-17 19:39:07.995332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.995530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.995831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.995971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.996107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.996307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.996555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.996763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.996902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 
00:30:09.936 [2024-11-17 19:39:07.996986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.997222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.997477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.997665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.997832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.997965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.998088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.998116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.936 [2024-11-17 19:39:07.998251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.998326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.936 [2024-11-17 19:39:07.998352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.936 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:07.998517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.998610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.998640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 
00:30:09.937 [2024-11-17 19:39:07.998758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.998851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.998880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:07.999053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:07.999272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:07.999508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:07.999839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:07.999984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.000064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.000283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 
00:30:09.937 [2024-11-17 19:39:08.000616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.000845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.000976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.001005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.001127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.001260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.001289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.001402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.001519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.001545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.001631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.002163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.002430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 
00:30:09.937 [2024-11-17 19:39:08.002691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.002832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.002929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.003182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.003410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.003608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.003862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.003990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.004150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 
00:30:09.937 [2024-11-17 19:39:08.004456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.004730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.004870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.005021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.005236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.005518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.005729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.005879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.937 qpair failed and we were unable to recover it. 00:30:09.937 [2024-11-17 19:39:08.005963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.937 [2024-11-17 19:39:08.006100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 
00:30:09.938 [2024-11-17 19:39:08.006253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.006464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.006681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.006827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.006955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.007230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.007423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.007648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 
00:30:09.938 [2024-11-17 19:39:08.007855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.007990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.008095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.008316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.008548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.008802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.008953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.009089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.009352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 
00:30:09.938 [2024-11-17 19:39:08.009548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.009812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.009934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.010016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.010209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.010235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.010354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.010499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.010527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.010667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.010762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.010792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.010988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.011114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.011143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.011281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.011397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.011422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 
00:30:09.938 [2024-11-17 19:39:08.011614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.011707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.011749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.011867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.012217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.012472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.012745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.012915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.012997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.013125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.013153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 00:30:09.938 [2024-11-17 19:39:08.013259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.013363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.938 [2024-11-17 19:39:08.013389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.938 qpair failed and we were unable to recover it. 
00:30:09.939 [2024-11-17 19:39:08.013496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.013648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.013684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.013824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.013905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.013930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.014024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.014219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.014481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.014733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.014874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.014947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 
00:30:09.939 [2024-11-17 19:39:08.015170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.015468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.015835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.015948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.016086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.016352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.016579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.016831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.016984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 
00:30:09.939 [2024-11-17 19:39:08.017120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.017327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.017609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.017782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.017921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.018157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.018428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.018717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.018836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 
00:30:09.939 [2024-11-17 19:39:08.018968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.019242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.019595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.019841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.019989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.020113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.020435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-11-17 19:39:08.020620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.020794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.939 qpair failed and we were unable to recover it. 
00:30:09.939 [2024-11-17 19:39:08.020921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.939 [2024-11-17 19:39:08.021044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.021175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.021457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.021738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.021920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.022024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.022272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.022583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 
00:30:09.940 [2024-11-17 19:39:08.022850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.022989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.023103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.023388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.023683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.023881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.023996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.024117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.024395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 
00:30:09.940 [2024-11-17 19:39:08.024657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.024796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.024912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.025186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.025444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.025764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.025896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.026016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.026248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 
00:30:09.940 [2024-11-17 19:39:08.026559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.026798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.026913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.027011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.027165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.027194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-11-17 19:39:08.027316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.027417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.940 [2024-11-17 19:39:08.027445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.027555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.027641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.027667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.027826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.027910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.027951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.028104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.028194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.028222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 
00:30:09.941 [2024-11-17 19:39:08.028354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.028482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.028509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.028627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.028796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.028822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.028907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.029192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.029385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.029578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.029804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.029908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 
00:30:09.941 [2024-11-17 19:39:08.030041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.030289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.030581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.030865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.030982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.031068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.031386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.031587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 
00:30:09.941 [2024-11-17 19:39:08.031888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.031997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.032172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.032421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.032709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.032839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.032960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.033155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.033409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 
00:30:09.941 [2024-11-17 19:39:08.033643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.033771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.033892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.034190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.034470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.941 qpair failed and we were unable to recover it. 00:30:09.941 [2024-11-17 19:39:08.034716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.941 [2024-11-17 19:39:08.034812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.034841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.034991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.035247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 
00:30:09.942 [2024-11-17 19:39:08.035467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.035725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.035901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.036002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.036277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.036589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.036850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.036990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.037147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 
00:30:09.942 [2024-11-17 19:39:08.037426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.037691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.037806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.037894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.038127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.038435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.038654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.038831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.038919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 
00:30:09.942 [2024-11-17 19:39:08.039141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.039462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.039772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.039892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.039984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.040258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.040590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.040870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.040983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 
00:30:09.942 [2024-11-17 19:39:08.041108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.041408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.041657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.041805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.041925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.042003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.042048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.042171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.042268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.942 [2024-11-17 19:39:08.042297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.942 qpair failed and we were unable to recover it. 00:30:09.942 [2024-11-17 19:39:08.042428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.042512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.042538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.042646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.042798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.042825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 
00:30:09.943 [2024-11-17 19:39:08.042929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.043225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.043515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.043810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.043931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.044096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.044234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.044259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.044393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.044528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.044553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.044666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.044809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.044838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 
00:30:09.943 [2024-11-17 19:39:08.044969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.045210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.045510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.045824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.045939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.046067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.046220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.046248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.046341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.046490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.046518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.046658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.046795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.046821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 
00:30:09.943 [2024-11-17 19:39:08.046960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.047267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.047519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.047836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.047993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.048108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.048367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 00:30:09.943 [2024-11-17 19:39:08.048626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.943 [2024-11-17 19:39:08.048795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:09.943 qpair failed and we were unable to recover it. 
00:30:09.943 [2024-11-17 19:39:08.048922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.943 [2024-11-17 19:39:08.049033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.943 [2024-11-17 19:39:08.049058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:09.943 qpair failed and we were unable to recover it.
00:30:09.943-00:30:09.949 (the connect()/qpair-failure sequence above repeats continuously from 19:39:08.048 through 19:39:08.089: every attempt targets addr=10.0.0.2, port=4420, every connect() fails with errno = 111, and each retry ends with "qpair failed and we were unable to recover it."; the failing queue pair is tqpair=0x7f0348000b90 up to roughly 19:39:08.068 and tqpair=0x7f0354000b90 from then on)
00:30:09.949 [2024-11-17 19:39:08.089383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.089495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.089520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.089608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.089734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.089762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.089928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.090153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.090373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.090661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.090820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.090902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 
00:30:09.949 [2024-11-17 19:39:08.091165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.091435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.091734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.091840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.091930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.092241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.092455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.949 qpair failed and we were unable to recover it. 00:30:09.949 [2024-11-17 19:39:08.092773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.949 [2024-11-17 19:39:08.092896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.092925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-11-17 19:39:08.093018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.093263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.093487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.093803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.093975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.094112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.094225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.094249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.094360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.094507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.094532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.094648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.094763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.094788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-11-17 19:39:08.094924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.095141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.095368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.095666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.095816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.095948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.096185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.096466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-11-17 19:39:08.096726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.096890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.097009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.097275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.097524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.097756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.097906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.097996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.098225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 
00:30:09.950 [2024-11-17 19:39:08.098510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.098741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.098874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.098961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.099225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.099463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.099744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.950 [2024-11-17 19:39:08.099909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.950 qpair failed and we were unable to recover it. 00:30:09.950 [2024-11-17 19:39:08.100038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.100165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.100192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-11-17 19:39:08.100305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.100419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.100444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.100571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.100691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.100720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.100873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.101148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.101456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.101713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.101837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.101938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-11-17 19:39:08.102205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.102499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.102801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.102964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.103119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.103237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.103264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.103359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.103444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.103471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.103599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.103710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.103736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.103881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-11-17 19:39:08.104197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.104438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.104700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.104826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.104954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.105183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.105447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.105722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.105874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 
00:30:09.951 [2024-11-17 19:39:08.106008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.106237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.106539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.106835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.106984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.107088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.107176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.107203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.107341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.107455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.107479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.951 qpair failed and we were unable to recover it. 00:30:09.951 [2024-11-17 19:39:08.107617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.107700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.951 [2024-11-17 19:39:08.107725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
00:30:09.952 [2024-11-17 19:39:08.107887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.108161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.108441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.108697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.108855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.108985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.109260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.109545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
00:30:09.952 [2024-11-17 19:39:08.109811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.109952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.110086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.110332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.110616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.110864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.110978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.111090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.111201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.111227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.111388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.111513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.111541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
00:30:09.952 [2024-11-17 19:39:08.111654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.111744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.111770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.111884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.112204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.112480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.112789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.112919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.113043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.113269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
00:30:09.952 [2024-11-17 19:39:08.113494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.113828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.113987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.114122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.114378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.114574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.114847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.114961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 00:30:09.952 [2024-11-17 19:39:08.115051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.115143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.952 [2024-11-17 19:39:08.115168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.952 qpair failed and we were unable to recover it. 
00:30:09.952 [2024-11-17 19:39:08.115283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.952 [2024-11-17 19:39:08.115414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.953 [2024-11-17 19:39:08.115441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:09.953 qpair failed and we were unable to recover it.
00:30:09.953 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 19:39:08.115283 through 19:39:08.155962 ...]
00:30:09.958 [2024-11-17 19:39:08.156086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.156222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.156246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-11-17 19:39:08.156401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.156525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.156571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-11-17 19:39:08.156691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.156776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.156819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-11-17 19:39:08.156917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-11-17 19:39:08.157135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-11-17 19:39:08.157329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.958 [2024-11-17 19:39:08.157689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.157827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 
00:30:09.958 [2024-11-17 19:39:08.157917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.158066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.958 [2024-11-17 19:39:08.158093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.958 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.158250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.158344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.158370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.158471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.158580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.158604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.158745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.158870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.158897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.158994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.159204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.159472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 
00:30:09.959 [2024-11-17 19:39:08.159721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.159873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.159985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.160239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.160522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:09.959 [2024-11-17 19:39:08.160827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.959 [2024-11-17 19:39:08.160978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:09.959 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.161116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.161376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 
00:30:10.257 [2024-11-17 19:39:08.161634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.161910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.161992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.162171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.162431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.162711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.162891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.162994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.163270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 
00:30:10.257 [2024-11-17 19:39:08.163498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.163795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.163916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.164043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.164314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.164567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.164817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.164925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.165039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 
00:30:10.257 [2024-11-17 19:39:08.165280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.165570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.257 qpair failed and we were unable to recover it. 00:30:10.257 [2024-11-17 19:39:08.165791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.257 [2024-11-17 19:39:08.165877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.165902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.166011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.166248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.166430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.166651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 
00:30:10.258 [2024-11-17 19:39:08.166911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.166985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.167155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.167410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.167631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.167886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.167977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.168116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.168400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 
00:30:10.258 [2024-11-17 19:39:08.168670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.168804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.168930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.169182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.169440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.169724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.169841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.169955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.170132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 
00:30:10.258 [2024-11-17 19:39:08.170392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.170625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.170823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.170956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.171098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.171368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.171638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.171888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.171976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 
00:30:10.258 [2024-11-17 19:39:08.172119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.172385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.258 qpair failed and we were unable to recover it. 00:30:10.258 [2024-11-17 19:39:08.172628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.258 [2024-11-17 19:39:08.172791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.172932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.173266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.173545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.173824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.173940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-11-17 19:39:08.174081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.174213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.174240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.174397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.174520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.174547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.174665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.174760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.174786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.174876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.175139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.175389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.175693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.175853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-11-17 19:39:08.175983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.176271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.176530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.176802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.176942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.177049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.177330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.177605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-11-17 19:39:08.177849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.177961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.178074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.178335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.178617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.178867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.178989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.179107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.179388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 
00:30:10.259 [2024-11-17 19:39:08.179618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.179890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.179980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.180008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.259 [2024-11-17 19:39:08.180117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.180203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-11-17 19:39:08.180227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.259 qpair failed and we were unable to recover it. 00:30:10.260 [2024-11-17 19:39:08.180346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.180432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.180457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-11-17 19:39:08.180592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.180681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.180741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-11-17 19:39:08.180853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.180947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.180971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 00:30:10.260 [2024-11-17 19:39:08.181072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.181151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-11-17 19:39:08.181175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.260 qpair failed and we were unable to recover it. 
00:30:10.260 [2024-11-17 19:39:08.181314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.260 [2024-11-17 19:39:08.181445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.260 [2024-11-17 19:39:08.181472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:10.260 qpair failed and we were unable to recover it.
[... the same error sequence (two posix_sock_create "connect() failed, errno = 111" lines, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420" line, then "qpair failed and we were unable to recover it.") repeats continuously for connection attempts timestamped 2024-11-17 19:39:08.181627 through 19:39:08.221112 ...]
00:30:10.266 [2024-11-17 19:39:08.221215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.266 [2024-11-17 19:39:08.221310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.266 [2024-11-17 19:39:08.221337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:10.266 qpair failed and we were unable to recover it.
00:30:10.266 [2024-11-17 19:39:08.221462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.221603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.221628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.221745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.221851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.221875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.222006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.222256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.222458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.222705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.222857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.222969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 
00:30:10.266 [2024-11-17 19:39:08.223170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.223471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.223711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.223863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.223989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.224235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.224482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.224710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.224889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 
00:30:10.266 [2024-11-17 19:39:08.225014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.225340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.225558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.225770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.225917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.226034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.226250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 00:30:10.266 [2024-11-17 19:39:08.226495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.266 qpair failed and we were unable to recover it. 
00:30:10.266 [2024-11-17 19:39:08.226749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.266 [2024-11-17 19:39:08.226885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.226975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.227201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.227417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.227617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.227742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.227853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.228158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 
00:30:10.267 [2024-11-17 19:39:08.228424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.228681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.228848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.228934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.229236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.229513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.229758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.229863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.229947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 
00:30:10.267 [2024-11-17 19:39:08.230174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.230451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.230697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.230875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.230959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.231274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.231540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.231793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.231944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 
00:30:10.267 [2024-11-17 19:39:08.232081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.232164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.232188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.267 [2024-11-17 19:39:08.232299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.232379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.267 [2024-11-17 19:39:08.232403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.267 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.232481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.232609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.232637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.232792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.232879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.232905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.233035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.233261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.233478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 
00:30:10.268 [2024-11-17 19:39:08.233725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.233896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.234001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.234292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.234584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.234886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.234991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.235105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.235326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 
00:30:10.268 [2024-11-17 19:39:08.235600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.235882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.235995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.236089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.236312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.236537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.236765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.236896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.237052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 
00:30:10.268 [2024-11-17 19:39:08.237356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.237610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.237850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.237981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.238068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.238288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.238574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.238867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.238984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.239009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 
00:30:10.268 [2024-11-17 19:39:08.239141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.239265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.239293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.268 qpair failed and we were unable to recover it. 00:30:10.268 [2024-11-17 19:39:08.239385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.268 [2024-11-17 19:39:08.239508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.239537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.239652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.239768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.239793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.239899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.240148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.240380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.240606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.240767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 
00:30:10.269 [2024-11-17 19:39:08.240917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.241163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.241411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.241685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.241838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.241971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.242259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.242509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 
00:30:10.269 [2024-11-17 19:39:08.242786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.242937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.243054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.243134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.243177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.243300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.243447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.243476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.243610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.243725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.243752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.243861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.244162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.244450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 
00:30:10.269 [2024-11-17 19:39:08.244642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.244773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.244895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.245203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.245475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-11-17 19:39:08.245746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-11-17 19:39:08.245854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.246001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.246268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-11-17 19:39:08.246530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.246808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.246938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.247045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.247165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.247195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.247344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.247428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.247457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.247685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.247795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.247820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.247900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.248197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-11-17 19:39:08.248478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.248787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.248902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.249023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.249298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.249564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.249781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.249889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.250018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-11-17 19:39:08.250236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.250458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.250774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.250880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.250987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.251266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.251536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.251772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.251907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-11-17 19:39:08.252032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.252291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.252550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.252822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.252934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-11-17 19:39:08.253075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-11-17 19:39:08.253184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.253308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.253617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-11-17 19:39:08.253867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.253994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.254121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.254333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.254601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.254806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.254945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.255054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.255263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-11-17 19:39:08.255481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.255799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.255972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.256106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.256219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.256250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.256423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.256542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.256570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.256700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.256802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.256827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.256920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.257175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-11-17 19:39:08.257448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.257705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.257847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.257954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.258247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.258517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.258706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.258825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-11-17 19:39:08.258927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.259002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.259028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-11-17 19:39:08.259110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-11-17 19:39:08.259225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.259250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.259376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.259459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.259486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.259634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.259721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.259750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.259907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.260181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.260488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.260822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.260933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-11-17 19:39:08.261042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.261332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.261636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.261878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.261987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.262110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.262236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.262264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.262374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.262508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.262534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.262659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.262789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.262814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-11-17 19:39:08.262926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.263177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.263398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.263734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.263878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.264017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.264259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.264529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 
00:30:10.272 [2024-11-17 19:39:08.264769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.264941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.265084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.265202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.265229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.265323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.265440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.265468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.272 qpair failed and we were unable to recover it. 00:30:10.272 [2024-11-17 19:39:08.265584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.265727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.272 [2024-11-17 19:39:08.265753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.265849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.266156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.266401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-11-17 19:39:08.266643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.266881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.266998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.267109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.267249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.267273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.267352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.267481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.267508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.267594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.267719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.267748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.267856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.268123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-11-17 19:39:08.268372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.268636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.268762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.268849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.269129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.269369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.269659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.269877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.269975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-11-17 19:39:08.270104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.270364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.270588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.270745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.270856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.271147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.271416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.271689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.271816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 
00:30:10.273 [2024-11-17 19:39:08.271926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.272048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.272076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.273 [2024-11-17 19:39:08.272218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.272340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.273 [2024-11-17 19:39:08.272366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.273 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.272475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.272592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.272616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.272731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.272848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.272874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.272972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.273288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.273510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-11-17 19:39:08.273827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.273939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.274066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.274400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.274647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.274873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.274988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.275097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.275252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.275275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.275439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.275539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.275565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-11-17 19:39:08.275742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.275854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.275880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.276024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.276298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.276616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.276816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.276962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.277079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.277318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 
00:30:10.274 [2024-11-17 19:39:08.277521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.277825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.277973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.278128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.278239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.278263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.278425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.278544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.278571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.278690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.278787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.278814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.278951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.279092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.279116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.274 qpair failed and we were unable to recover it. 00:30:10.274 [2024-11-17 19:39:08.279292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.274 [2024-11-17 19:39:08.279403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-11-17 19:39:08.279428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 
00:30:10.275 [2024-11-17 19:39:08.279539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.275 [2024-11-17 19:39:08.279670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.275 [2024-11-17 19:39:08.279708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:10.275 qpair failed and we were unable to recover it.
[the same sequence of posix.c:1032:posix_sock_create connect() failures (errno = 111) followed by an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 repeats continuously through 2024-11-17 19:39:08.306794; every attempt ends with "qpair failed and we were unable to recover it."]
00:30:10.278 [2024-11-17 19:39:08.306973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.278 [2024-11-17 19:39:08.307108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.278 [2024-11-17 19:39:08.307140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:10.278 qpair failed and we were unable to recover it.
[the same sequence then repeats for tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 through 2024-11-17 19:39:08.319293; each attempt again ends with "qpair failed and we were unable to recover it."]
00:30:10.280 [2024-11-17 19:39:08.319388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.280 [2024-11-17 19:39:08.319541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.280 [2024-11-17 19:39:08.319568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:10.280 qpair failed and we were unable to recover it.
00:30:10.280 [2024-11-17 19:39:08.319686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.319774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.319799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 00:30:10.280 [2024-11-17 19:39:08.319890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.319990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 00:30:10.280 [2024-11-17 19:39:08.320170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 00:30:10.280 [2024-11-17 19:39:08.320436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 00:30:10.280 [2024-11-17 19:39:08.320657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.320839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 00:30:10.280 [2024-11-17 19:39:08.320953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.321086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.321114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 00:30:10.280 [2024-11-17 19:39:08.321222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.321310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.280 [2024-11-17 19:39:08.321335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.280 qpair failed and we were unable to recover it. 
00:30:10.280 [2024-11-17 19:39:08.321448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.321562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.321589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.321695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.321827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.321855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.321974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.322212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.322485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.322782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.322919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.323077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 
00:30:10.281 [2024-11-17 19:39:08.323373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.323603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.323832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.323988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.324138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.324259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.324288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.324393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.324506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.324532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.324623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.324769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.324796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.324916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 
00:30:10.281 [2024-11-17 19:39:08.325201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.325475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.325686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.325831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-11-17 19:39:08.325969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.326079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-11-17 19:39:08.326103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.326203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.326302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.326330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.326414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.326544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.326588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.326715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.326827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.326852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-11-17 19:39:08.326999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.327269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.327525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.327768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.327877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.327986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.328253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.328457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-11-17 19:39:08.328699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.328853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.328967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.329109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.329135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-11-17 19:39:08.329225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-11-17 19:39:08.329333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.329357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.329445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.329525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.329566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.329705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.329806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.329833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.329918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.330205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.283 [2024-11-17 19:39:08.330472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.330710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.330855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.330979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.331246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.331450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.331699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.331880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.331983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.283 [2024-11-17 19:39:08.332227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.332460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.332757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.332898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.332989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.333303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.333544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-11-17 19:39:08.333764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-11-17 19:39:08.333952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.284 [2024-11-17 19:39:08.334048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.334296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.334571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.334828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.334946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.335057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.335302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.335545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 
00:30:10.284 [2024-11-17 19:39:08.335790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.335957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.336058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.336334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.336554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.336785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.336955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.337084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.337360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 
00:30:10.284 [2024-11-17 19:39:08.337605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.337823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.337977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.338074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.338316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.338613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.338871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.338982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.339137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 
00:30:10.284 [2024-11-17 19:39:08.339446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.339762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.284 [2024-11-17 19:39:08.339872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.284 qpair failed and we were unable to recover it. 00:30:10.284 [2024-11-17 19:39:08.340027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.340322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.340550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.340754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.340919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.341043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-11-17 19:39:08.341285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.341475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.341752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.341886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.342022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.342215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.342464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.342685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-11-17 19:39:08.342899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.342985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.343125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.343396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.343585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.343857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.343990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.344014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.344099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.344211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.344237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 00:30:10.285 [2024-11-17 19:39:08.344357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.344438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.285 [2024-11-17 19:39:08.344464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.285 qpair failed and we were unable to recover it. 
00:30:10.285 [2024-11-17 19:39:08.344550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.285 [2024-11-17 19:39:08.344627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.285 [2024-11-17 19:39:08.344652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:10.285 qpair failed and we were unable to recover it.
00:30:10.285 [2024-11-17 19:39:08.344798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.285 [2024-11-17 19:39:08.344879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.285 [2024-11-17 19:39:08.344908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:10.285 qpair failed and we were unable to recover it.
00:30:10.285-00:30:10.290 [2024-11-17 19:39:08.345021 through 19:39:08.384740] posix.c:1032:posix_sock_create / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: the identical connect() failure (errno = 111) against tqpair=0x7f0348000b90, addr=10.0.0.2, port=4420 repeats for every remaining attempt in this interval; each attempt ends with "qpair failed and we were unable to recover it."
00:30:10.290 [2024-11-17 19:39:08.384848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.384933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.384958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.385041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.385234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.385525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.385745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.385851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.385987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.386208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 
00:30:10.290 [2024-11-17 19:39:08.386467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.386725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.386886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.387025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.387224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.387497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.387760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.387900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.387991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 
00:30:10.290 [2024-11-17 19:39:08.388195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.388483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.388797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.388906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.290 [2024-11-17 19:39:08.389055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.389148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.290 [2024-11-17 19:39:08.389176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.290 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.389305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.389385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.389410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.389505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.389606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.389633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.389769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.389871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.389899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 
00:30:10.291 [2024-11-17 19:39:08.390062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.390254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.390601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.390846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.390955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.391087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.391334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.391588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 
00:30:10.291 [2024-11-17 19:39:08.391835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.391967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.392103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.392384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.392629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.392869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.392989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.393118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.393331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 
00:30:10.291 [2024-11-17 19:39:08.393607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.393847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.393988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.394112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.394424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.394682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.394797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.394908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.395175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 
00:30:10.291 [2024-11-17 19:39:08.395462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.395697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.395821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.395936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.396145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.396405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.291 qpair failed and we were unable to recover it. 00:30:10.291 [2024-11-17 19:39:08.396696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.291 [2024-11-17 19:39:08.396843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.396976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-11-17 19:39:08.397219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.397495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.397726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.397915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.397994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.398108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.398384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.398622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.398773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-11-17 19:39:08.398867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.399179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.399447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.399666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.399828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.399939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.400211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.400545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-11-17 19:39:08.400844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.400952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.401067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.401347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.401623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.401843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.401974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.402096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.402392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-11-17 19:39:08.402595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.402871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.402992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.403119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.403311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.403540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.403752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.403892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.404052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-11-17 19:39:08.404325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.404550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.404807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.404943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.405038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.405290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.405524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.405749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.405908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 
00:30:10.292 [2024-11-17 19:39:08.406025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.406224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.406508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.406733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.292 [2024-11-17 19:39:08.406869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.292 qpair failed and we were unable to recover it. 00:30:10.292 [2024-11-17 19:39:08.406950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.407224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.407468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 
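The errno = 111 reported above by posix_sock_create is ECONNREFUSED on Linux: the plain TCP connect() underneath the qpair finds nothing accepting on 10.0.0.2:4420 (the standard NVMe/TCP port), so nvme_tcp_qpair_connect_sock cannot bring the qpair up and it is abandoned. A minimal stand-alone sketch of the same failure, assuming the address and port taken from the log and using only standard POSIX sockets (it is not part of SPDK or of this test run), looks like:

/* Sketch only: reproduce the errno = 111 (ECONNREFUSED) failure seen in the
 * log by attempting a plain TCP connect() to an address/port with no
 * listener. Address and port are assumptions copied from the log lines. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target this prints errno 111 (ECONNREFUSED),
         * matching the repeated posix_sock_create failures above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }
    close(fd);
    return 0;
}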
00:30:10.293 [2024-11-17 19:39:08.407808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.407951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.408046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.408244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.408299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.408490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.408637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.408684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.408847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.408974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.409174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.409454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 00:30:10.293 [2024-11-17 19:39:08.409730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.293 [2024-11-17 19:39:08.409864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.293 qpair failed and we were unable to recover it. 
00:30:10.293 [2024-11-17 19:39:08.409968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.293 [2024-11-17 19:39:08.410091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.293 [2024-11-17 19:39:08.410119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:10.293 qpair failed and we were unable to recover it.
00:30:10.293 [2024-11-17 19:39:08.415556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.293 [2024-11-17 19:39:08.415669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.293 [2024-11-17 19:39:08.415711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420
00:30:10.293 qpair failed and we were unable to recover it.
00:30:10.293 [2024-11-17 19:39:08.415859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.293 [2024-11-17 19:39:08.416012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.293 [2024-11-17 19:39:08.416044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:10.293 qpair failed and we were unable to recover it.
00:30:10.298 [2024-11-17 19:39:08.445514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.298 [2024-11-17 19:39:08.445626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.298 [2024-11-17 19:39:08.445652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420
00:30:10.298 qpair failed and we were unable to recover it.
00:30:10.298 [2024-11-17 19:39:08.445792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-11-17 19:39:08.445914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-11-17 19:39:08.445943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-11-17 19:39:08.446117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-11-17 19:39:08.446251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-11-17 19:39:08.446294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.298 qpair failed and we were unable to recover it. 00:30:10.298 [2024-11-17 19:39:08.446434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-11-17 19:39:08.446565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.298 [2024-11-17 19:39:08.446602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.446793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.446924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.446961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.447145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.447290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.447331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.447515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.447618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.447648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.447807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.447940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-11-17 19:39:08.448097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.448324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.448623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.448868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.449113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.449243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.449272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.449367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.449481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.449509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0348000b90 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.449691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.449858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.449899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.450060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.450228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.450288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-11-17 19:39:08.450471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.450598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.450634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.450793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.450900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.450929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.451078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.451172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.451201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.451340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.451462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.451488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.451616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.451787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.451824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.451933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.452124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.452163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.452296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.452426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.452463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-11-17 19:39:08.452596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.452763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.452799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.452966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.453295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.453544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.453789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.453978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.454083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.454218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.454255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.454420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.454534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.454573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 
00:30:10.299 [2024-11-17 19:39:08.454743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.454911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.454948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.455161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.455291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.455331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.455491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.455613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.455643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.299 [2024-11-17 19:39:08.455770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.455882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.299 [2024-11-17 19:39:08.455909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.299 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.456002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.456089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.456130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.456255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.456403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.456444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.456596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.456707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.456772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 
00:30:10.300 [2024-11-17 19:39:08.456905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.457205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.457561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.457836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.457981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.458141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.458264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.458293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.458414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.458539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.458569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.458698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.458791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.458818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 
00:30:10.300 [2024-11-17 19:39:08.458979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.459105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.459146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.459305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.459473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.459512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.459670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.459806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.459842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.460010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.460177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.460217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.460359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.460501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.460528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.300 qpair failed and we were unable to recover it. 00:30:10.300 [2024-11-17 19:39:08.460617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.300 [2024-11-17 19:39:08.460731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.460758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.460870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.461033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.461059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 
00:30:10.301 [2024-11-17 19:39:08.461198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.461355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.461391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.461577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.461778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.461815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.461929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.462273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.462576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.462865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.462982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.463066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.463226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.463254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 
00:30:10.301 [2024-11-17 19:39:08.463361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.463480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.463515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.463667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.463798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.463840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.463962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.464111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.464151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.464291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.464449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.464484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.464636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.464782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.464824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.464990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.465289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 
00:30:10.301 [2024-11-17 19:39:08.465537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.465820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.465953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.466097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.466179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.466205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.466336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.466486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.466525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.466684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.466831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.466870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.467029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.467205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.467246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.467366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.467507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.467546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 
00:30:10.301 [2024-11-17 19:39:08.467725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.467890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.467920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.468055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.468146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.468184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.301 [2024-11-17 19:39:08.468335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.468483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.301 [2024-11-17 19:39:08.468512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.301 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.468605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.468732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.468767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.468883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.468971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.468998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.469114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.469242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.469280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.469458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.469629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.469668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 
00:30:10.302 [2024-11-17 19:39:08.469834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.469946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.469982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.470118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.470309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.470345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.470474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.470587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.470617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.470735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.470854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.470880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.470969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.471225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.471475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 
00:30:10.302 [2024-11-17 19:39:08.471712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.471871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.472019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.472136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.472176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.472372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.472503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.472539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.472707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.472852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.472888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.473033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.473201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.473244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.473419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.473535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.473561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.473655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.473746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.473773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 
00:30:10.302 [2024-11-17 19:39:08.473902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.474027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.474067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.474229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.474358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.474395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.474542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.474686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.474727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.474843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.475237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.475505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.475809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.475971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 
00:30:10.302 [2024-11-17 19:39:08.476115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.476250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.476286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.476416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.476566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.476606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.476750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.476896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.476936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.477089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.477221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.477258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.302 qpair failed and we were unable to recover it. 00:30:10.302 [2024-11-17 19:39:08.477457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.477568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.302 [2024-11-17 19:39:08.477598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.303 qpair failed and we were unable to recover it. 00:30:10.303 [2024-11-17 19:39:08.477699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.477816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.477846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.303 qpair failed and we were unable to recover it. 00:30:10.303 [2024-11-17 19:39:08.478006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.303 qpair failed and we were unable to recover it. 
00:30:10.303 [2024-11-17 19:39:08.478237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.303 qpair failed and we were unable to recover it. 00:30:10.303 [2024-11-17 19:39:08.478489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.303 qpair failed and we were unable to recover it. 00:30:10.303 [2024-11-17 19:39:08.478834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.478968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.303 [2024-11-17 19:39:08.479005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.303 qpair failed and we were unable to recover it. 00:30:10.303 [2024-11-17 19:39:08.479111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.479266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.479305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.304 qpair failed and we were unable to recover it. 00:30:10.304 [2024-11-17 19:39:08.479470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.479631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.479660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.304 qpair failed and we were unable to recover it. 00:30:10.304 [2024-11-17 19:39:08.479793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.479878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.479904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.304 qpair failed and we were unable to recover it. 00:30:10.304 [2024-11-17 19:39:08.479981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.480113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.304 [2024-11-17 19:39:08.480141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.304 qpair failed and we were unable to recover it. 
00:30:10.304 [2024-11-17 19:39:08.480232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.304 [2024-11-17 19:39:08.480321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.304 [2024-11-17 19:39:08.480350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420
00:30:10.304 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it.") repeats continuously from [2024-11-17 19:39:08.480550] through [2024-11-17 19:39:08.525744], console time 00:30:10.304 through 00:30:10.586 ...]
00:30:10.586 [2024-11-17 19:39:08.525887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.526025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.526060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.526199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.526398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.526441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.526605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.526770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.526812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.526940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.527217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.527492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.527746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.527849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 
00:30:10.586 [2024-11-17 19:39:08.527945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.528086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.528126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.528241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.528378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.528417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.528605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.528727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.528779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.528899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.529042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.529083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.529260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.529404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.529443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.529610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.529741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.529779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.529922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 
00:30:10.586 [2024-11-17 19:39:08.530164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.530430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.530685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.530895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.531035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.531161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.531201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.531359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.531470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.531506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.586 [2024-11-17 19:39:08.531643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.531830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.586 [2024-11-17 19:39:08.531869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.586 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.532043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.532183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.532223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 
00:30:10.587 [2024-11-17 19:39:08.532387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.532520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.532548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.532656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.532787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.532816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.532939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.533097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.533123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.533260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.533388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.533424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.533629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.533754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.533790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.533978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.534166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.534225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.534378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.534485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.534513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 
00:30:10.587 [2024-11-17 19:39:08.534653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.534798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.534827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.534947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.535197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.535480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.535798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.535946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.536086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.536221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.536256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.536410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.536599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.536639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 
00:30:10.587 [2024-11-17 19:39:08.536815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.536949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.536978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.537074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.537310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.537500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.537823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.537966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.538076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.538318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 
00:30:10.587 [2024-11-17 19:39:08.538595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.538822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.538991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.539090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.539366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.539571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.539766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.539906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.540018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.540127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.540153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 
00:30:10.587 [2024-11-17 19:39:08.540266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.540349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.540375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.587 qpair failed and we were unable to recover it. 00:30:10.587 [2024-11-17 19:39:08.540458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.540596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.587 [2024-11-17 19:39:08.540622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.540699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.540813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.540839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.540959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.541186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.541455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.541709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.541826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 
00:30:10.588 [2024-11-17 19:39:08.541906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.542144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.542366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.542608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.542848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.542979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.543077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.543324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 
00:30:10.588 [2024-11-17 19:39:08.543615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.543833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.543985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.544106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.544232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.544261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.544392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.544505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.544532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.544659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.544822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.544849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.544931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.545286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 
00:30:10.588 [2024-11-17 19:39:08.545525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.545768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.545932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.546047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.546301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.546547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.546780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.546890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.588 qpair failed and we were unable to recover it. 00:30:10.588 [2024-11-17 19:39:08.547008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.588 [2024-11-17 19:39:08.547114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 
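The errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the host reached 10.0.0.2, but nothing was accepting TCP connections on port 4420 (the standard NVMe/TCP port), so nvme_tcp_qpair_connect_sock() reports each queue pair as "qpair failed and we were unable to recover it." A minimal sketch of how a refused connect() surfaces errno 111 follows; it is illustrative only (not SPDK code), and the loopback address plus the assumption that no listener is bound to the port are demo choices, not the test's configuration:

/* Minimal sketch (not SPDK code): a TCP connect() to a port with no
 * listener reports ECONNREFUSED, which is errno 111 on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),                 /* NVMe/TCP port seen in the log */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assume no local listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on the port, this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Until a listener comes back on the port, every retry takes this same path, which is why the entries above repeat with only the timestamps and qpair pointers changing.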
00:30:10.589 [2024-11-17 19:39:08.547225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1331052 Killed "${NVMF_APP[@]}" "$@" 00:30:10.589 [2024-11-17 19:39:08.547486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 19:39:08 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:30:10.589 [2024-11-17 19:39:08.547853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.547995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 19:39:08 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.548110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 19:39:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:10.589 [2024-11-17 19:39:08.548237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.548273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 19:39:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.548382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:30:10.589 [2024-11-17 19:39:08.548475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.548502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.548621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.548804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.548841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 
00:30:10.589 [2024-11-17 19:39:08.548971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.549246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.549486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.549742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.549911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.550010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.550280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.550522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 
00:30:10.589 [2024-11-17 19:39:08.550802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.550963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.551088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.551340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.551605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 19:39:08 -- nvmf/common.sh@469 -- # nvmfpid=1332043 00:30:10.589 [2024-11-17 19:39:08.551862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 19:39:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:10.589 19:39:08 -- nvmf/common.sh@470 -- # waitforlisten 1332043 00:30:10.589 [2024-11-17 19:39:08.551954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.551984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.552136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 19:39:08 -- common/autotest_common.sh@829 -- # '[' -z 1332043 ']' 00:30:10.589 [2024-11-17 19:39:08.552233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.552262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 19:39:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.589 qpair failed and we were unable to recover it. 
00:30:10.589 [2024-11-17 19:39:08.552361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 19:39:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:10.589 [2024-11-17 19:39:08.552478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.552504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 19:39:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.589 [2024-11-17 19:39:08.552630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 19:39:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:10.589 [2024-11-17 19:39:08.552737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.552767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 19:39:08 -- common/autotest_common.sh@10 -- # set +x 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.552853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.552946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.552975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.553087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.553176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.553207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.553323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.553454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.553482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.589 qpair failed and we were unable to recover it. 00:30:10.589 [2024-11-17 19:39:08.553572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.553702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.589 [2024-11-17 19:39:08.553731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 
00:30:10.590 [2024-11-17 19:39:08.553841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.553957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.553984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.554097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.554365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.554617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.554866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.554996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.555087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.555239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.555268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.555418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.555555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.555579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 
00:30:10.590 [2024-11-17 19:39:08.555715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.555807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.555839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.555986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.556173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.556447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.556690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.556851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.556962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.557201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 
00:30:10.590 [2024-11-17 19:39:08.557430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.557692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.557910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.557984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.558123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.558439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.558678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.558810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.558934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 
00:30:10.590 [2024-11-17 19:39:08.559221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.559477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.559788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.590 [2024-11-17 19:39:08.559933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.590 qpair failed and we were unable to recover it. 00:30:10.590 [2024-11-17 19:39:08.560032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.560296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.560545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.560812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.560938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 
00:30:10.591 [2024-11-17 19:39:08.561024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.561278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.561513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.561749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.561883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.561973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.562217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.562442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 
00:30:10.591 [2024-11-17 19:39:08.562744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.562912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.563039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.563273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.563506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.563759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.563896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.563999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.564228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 
00:30:10.591 [2024-11-17 19:39:08.564490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.564755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.564870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.565009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.565228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.565453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.565689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.565834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.565970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 
00:30:10.591 [2024-11-17 19:39:08.566238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.566514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.591 [2024-11-17 19:39:08.566812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.591 [2024-11-17 19:39:08.566955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.591 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.567101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.567354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.567581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.567860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.567974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 
00:30:10.592 [2024-11-17 19:39:08.568090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.568354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.568585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.568799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.568948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.569052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.569276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.569502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 
00:30:10.592 [2024-11-17 19:39:08.569789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.569906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.570021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.570264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.570519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.570732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.570913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.571012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.571274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 
00:30:10.592 [2024-11-17 19:39:08.571600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.571854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.571966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.572096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.572343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.572607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.572839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.572980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.573095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 
00:30:10.592 [2024-11-17 19:39:08.573325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.573546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.573770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.573902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.573986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.574098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.574123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.592 qpair failed and we were unable to recover it. 00:30:10.592 [2024-11-17 19:39:08.574215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.592 [2024-11-17 19:39:08.574298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.574324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.574438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.574523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.574549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.574628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.574768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.574794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 
00:30:10.593 [2024-11-17 19:39:08.574933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.575145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.575392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.575664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.575909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.575987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.576096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.576280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 
00:30:10.593 [2024-11-17 19:39:08.576563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.576795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.576904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.577016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.577238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.577483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.577745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.577861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.577953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 
00:30:10.593 [2024-11-17 19:39:08.578234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.578445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.578707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.578845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.578988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.579237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.579478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.579705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.579844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 
00:30:10.593 [2024-11-17 19:39:08.579961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.580037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.580063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.580159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.580478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.580508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.593 qpair failed and we were unable to recover it. 00:30:10.593 [2024-11-17 19:39:08.580623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.593 [2024-11-17 19:39:08.580746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.580775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.580858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.580978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.581090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.581276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.581515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 
00:30:10.594 [2024-11-17 19:39:08.581720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.581832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.581944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.582162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.582412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.582655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.582862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.582978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.583094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 
00:30:10.594 [2024-11-17 19:39:08.583338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.583536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.583725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.583830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.583907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.584145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.584394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.584643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.584782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 
00:30:10.594 [2024-11-17 19:39:08.584892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.585137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.585410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.585629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.585866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.585969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.586080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.586224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.586251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.586389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.586498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.586524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 
00:30:10.594 [2024-11-17 19:39:08.586663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.586825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.586851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.586943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.587192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.587414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.587615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.594 qpair failed and we were unable to recover it. 00:30:10.594 [2024-11-17 19:39:08.587873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.594 [2024-11-17 19:39:08.587979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.588096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 
00:30:10.595 [2024-11-17 19:39:08.588343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.588579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.588832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.588947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.589060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.589285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.589541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.589756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.589878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 
00:30:10.595 [2024-11-17 19:39:08.589994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.590256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.590573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.590813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.590925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.591009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.591262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.591515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 
00:30:10.595 [2024-11-17 19:39:08.591781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.591894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.592006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.592259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.592486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.595 qpair failed and we were unable to recover it. 00:30:10.595 [2024-11-17 19:39:08.592717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.595 [2024-11-17 19:39:08.592828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.592853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.592990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.593245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 
00:30:10.596 [2024-11-17 19:39:08.593516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.593767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.593877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.593956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.594236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.594488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.594732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.594863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.594976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 
00:30:10.596 [2024-11-17 19:39:08.595202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.595459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.595696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.595828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.595966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.596188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:10.596 [2024-11-17 19:39:08.596371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.596410] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.596 [2024-11-17 19:39:08.596474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.596782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.596952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 
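Mixed into the retry loop above, the nvmf target application itself starts: SPDK v24.01.1-pre (git sha1 c13c99a5e) on DPDK 22.11.4, with EAL arguments that pin it to core mask 0xF0 (cores 4-7, matching the reactors reported further down), give it its own --file-prefix=spdk0 hugepage namespace, and fix --base-virtaddr for a stable memory layout. As a rough hand-run equivalent of that launch, one might invoke the target binary directly; the binary path and option spelling below are assumptions based on a typical SPDK build tree, not taken from this log.

    # Hypothetical manual launch mirroring the EAL parameters printed above.
    ./build/bin/nvmf_tgt -m 0xF0 --base-virtaddr=0x200000000000 &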
00:30:10.596 [2024-11-17 19:39:08.597090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.597230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.597263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.597396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.597551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.597584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.597734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.597816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.597841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.597960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.598165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.598466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.598850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.598996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 
00:30:10.596 [2024-11-17 19:39:08.599163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.599268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.599302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.599465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.599565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.599590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.599686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.599773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.599800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.599921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.600143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.600386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 00:30:10.596 [2024-11-17 19:39:08.600650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.596 [2024-11-17 19:39:08.600821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.596 qpair failed and we were unable to recover it. 
00:30:10.597 [2024-11-17 19:39:08.600995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.601257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.601564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.601851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.601993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.602106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.602211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.602244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.602354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.602485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.602518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.602651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.602765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.602798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 
00:30:10.597 [2024-11-17 19:39:08.602907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.603184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.603429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.603671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.603803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.603892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.604177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.604436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 
00:30:10.597 [2024-11-17 19:39:08.604688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.604857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.604995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.605324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.605576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.605790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.605921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.606007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.606237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 
00:30:10.597 [2024-11-17 19:39:08.606484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.606811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.606970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.607083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.607220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.607253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.607365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.607491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.607525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.607659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.607782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.607807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.607959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.608144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 
00:30:10.597 [2024-11-17 19:39:08.608394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.608709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.608844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.597 [2024-11-17 19:39:08.609015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.609128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.597 [2024-11-17 19:39:08.609154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.597 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.609272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.609354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.609378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.609468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.609585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.609616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.609703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.609793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.609818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.609926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 
00:30:10.598 [2024-11-17 19:39:08.610175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.610389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.610640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.610895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.610998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.611105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.611396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.611706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.611820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 
00:30:10.598 [2024-11-17 19:39:08.611932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.612158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.612380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.612565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.612803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.612913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.612994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.613239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 
00:30:10.598 [2024-11-17 19:39:08.613509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.613734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.613865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.613951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.614227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.614503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.614766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.614903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.614991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 
00:30:10.598 [2024-11-17 19:39:08.615240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.615468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.615702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.615815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.615963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.616041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.616065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 qpair failed and we were unable to recover it. 00:30:10.598 [2024-11-17 19:39:08.616178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.616290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.598 [2024-11-17 19:39:08.616314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.598 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.598 [2024-11-17 19:39:08.664078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.598 [2024-11-17 19:39:08.754332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:10.598 [2024-11-17 19:39:08.754472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.598 [2024-11-17 19:39:08.754490] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.599 [2024-11-17 19:39:08.754503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.599 [2024-11-17 19:39:08.754613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:10.599 [2024-11-17 19:39:08.754687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:10.599 [2024-11-17 19:39:08.754773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:10.599 [2024-11-17 19:39:08.754776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:10.858 qpair failed and we were unable to recover it. 
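This block is the target finally initializing: the EAL warns that NUMA node 1 has no free 2048 kB hugepages (node 0 evidently does, since startup continues), four reactors come up on cores 4-7 as requested by -c 0xF0, and tracing is enabled with group mask 0xFFFF. The NOTICE lines spell out how to inspect that trace; a short sketch combining them with a standard Linux check of the per-node hugepage pools (the sysfs path is an assumption about the host, the spdk_trace command is quoted from the notices) is:

    # Per-node 2048 kB hugepage pools behind the "No free 2048 kB hugepages" warning.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    # Snapshot the nvmf tracepoints of the running target (instance id 0), or keep
    # the shared-memory file for offline analysis, as the NOTICE lines suggest.
    spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0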
00:30:10.858 [2024-11-17 19:39:09.034050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-11-17 19:39:09.034167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.858 [2024-11-17 19:39:09.034194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.858 qpair failed and we were unable to recover it. 00:30:10.858 [2024-11-17 19:39:09.034289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.034395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.034421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.034551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.034688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.034714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.034867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.034967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.034993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.035124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.035221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.035256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2390 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.035404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.035537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.035567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.035712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.035810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.035836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 
00:30:10.859 [2024-11-17 19:39:09.035923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.036164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.036401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 [2024-11-17 19:39:09.036649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.036809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0354000b90 with addr=10.0.0.2, port=4420 00:30:10.859 qpair failed and we were unable to recover it. 00:30:10.859 A controller has encountered a failure and is being reset. 00:30:10.859 [2024-11-17 19:39:09.036963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.037068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.859 [2024-11-17 19:39:09.037094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbfe00 with addr=10.0.0.2, port=4420 00:30:10.859 [2024-11-17 19:39:09.037111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbfe00 is same with the state(5) to be set 00:30:10.859 [2024-11-17 19:39:09.037134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbfe00 (9): Bad file descriptor 00:30:10.859 [2024-11-17 19:39:09.037152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:10.859 [2024-11-17 19:39:09.037166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:10.859 [2024-11-17 19:39:09.037181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.859 Unable to reset the controller. 
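Editor's note: errno 111 on Linux is ECONNREFUSED, so every posix_sock_create error above is the host dialing 10.0.0.2:4420 while no listener is up; that is the condition this disconnect test provokes before the target is reconfigured below. A hedged, stand-alone probe (not part of the test scripts) showing the same refusal from plain bash:

```sh
# Illustration only: attempt a TCP connect to the address/port the qpairs above use.
# With nothing listening on 10.0.0.2:4420 the connect fails with ECONNREFUSED (errno 111).
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "connect refused, matching the posix_sock_create errno 111 errors"
fi
```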
00:30:11.436 19:39:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.436 19:39:09 -- common/autotest_common.sh@862 -- # return 0 00:30:11.436 19:39:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:11.436 19:39:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 19:39:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.436 19:39:09 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.436 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 Malloc0 00:30:11.436 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.436 19:39:09 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:11.436 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 [2024-11-17 19:39:09.624665] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.436 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.436 19:39:09 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.436 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.436 19:39:09 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.436 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.436 19:39:09 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.436 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 [2024-11-17 19:39:09.652917] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.436 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.436 19:39:09 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.436 19:39:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.436 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:30:11.436 19:39:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.436 19:39:09 -- host/target_disconnect.sh@58 -- # wait 1331212 00:30:12.003 Controller properly reset. 00:30:17.275 Initializing NVMe Controllers 00:30:17.275 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:17.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:17.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:17.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:17.275 Initialization complete. 
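Editor's note: the rpc_cmd trace above is the complete target-side bring-up for this test case. Outside the harness the same sequence can be replayed with scripts/rpc.py against an already running nvmf_tgt on its default RPC socket; the bdev size, NQN, address and port below are copied verbatim from the trace (a sketch, not the test itself):

```sh
# Malloc-backed namespace: 64 MB bdev with 512-byte blocks, as in the trace.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# TCP transport with the same options the test passes.
./scripts/rpc.py nvmf_create_transport -t tcp -o
# Subsystem, namespace, and data/discovery listeners on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```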
Launching workers. 00:30:17.275 Starting thread on core 1 00:30:17.275 Starting thread on core 2 00:30:17.275 Starting thread on core 3 00:30:17.275 Starting thread on core 0 00:30:17.275 19:39:14 -- host/target_disconnect.sh@59 -- # sync 00:30:17.275 00:30:17.275 real 0m11.495s 00:30:17.275 user 0m36.549s 00:30:17.275 sys 0m7.267s 00:30:17.275 19:39:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:17.275 19:39:14 -- common/autotest_common.sh@10 -- # set +x 00:30:17.275 ************************************ 00:30:17.275 END TEST nvmf_target_disconnect_tc2 00:30:17.275 ************************************ 00:30:17.275 19:39:14 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:17.275 19:39:14 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:17.275 19:39:14 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:17.275 19:39:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:17.275 19:39:14 -- nvmf/common.sh@116 -- # sync 00:30:17.275 19:39:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:17.275 19:39:14 -- nvmf/common.sh@119 -- # set +e 00:30:17.275 19:39:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:17.276 19:39:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:17.276 rmmod nvme_tcp 00:30:17.276 rmmod nvme_fabrics 00:30:17.276 rmmod nvme_keyring 00:30:17.276 19:39:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:17.276 19:39:14 -- nvmf/common.sh@123 -- # set -e 00:30:17.276 19:39:14 -- nvmf/common.sh@124 -- # return 0 00:30:17.276 19:39:14 -- nvmf/common.sh@477 -- # '[' -n 1332043 ']' 00:30:17.276 19:39:14 -- nvmf/common.sh@478 -- # killprocess 1332043 00:30:17.276 19:39:14 -- common/autotest_common.sh@936 -- # '[' -z 1332043 ']' 00:30:17.276 19:39:14 -- common/autotest_common.sh@940 -- # kill -0 1332043 00:30:17.276 19:39:14 -- common/autotest_common.sh@941 -- # uname 00:30:17.276 19:39:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:17.276 19:39:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1332043 00:30:17.276 19:39:14 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:30:17.276 19:39:14 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:30:17.276 19:39:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1332043' 00:30:17.276 killing process with pid 1332043 00:30:17.276 19:39:14 -- common/autotest_common.sh@955 -- # kill 1332043 00:30:17.276 19:39:14 -- common/autotest_common.sh@960 -- # wait 1332043 00:30:17.276 19:39:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:17.276 19:39:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:17.276 19:39:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:17.276 19:39:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.276 19:39:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:17.276 19:39:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.276 19:39:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.276 19:39:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.225 19:39:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:19.226 00:30:19.226 real 0m16.191s 00:30:19.226 user 1m2.438s 00:30:19.226 sys 0m9.472s 00:30:19.226 19:39:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:19.226 19:39:17 -- common/autotest_common.sh@10 -- # set +x 00:30:19.226 ************************************ 00:30:19.226 END TEST nvmf_target_disconnect 00:30:19.226 
************************************ 00:30:19.226 19:39:17 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:19.226 19:39:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:19.226 19:39:17 -- common/autotest_common.sh@10 -- # set +x 00:30:19.226 19:39:17 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:19.226 00:30:19.226 real 22m58.511s 00:30:19.226 user 67m14.597s 00:30:19.226 sys 5m28.469s 00:30:19.226 19:39:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:19.226 19:39:17 -- common/autotest_common.sh@10 -- # set +x 00:30:19.226 ************************************ 00:30:19.226 END TEST nvmf_tcp 00:30:19.226 ************************************ 00:30:19.226 19:39:17 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:30:19.226 19:39:17 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.226 19:39:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:19.226 19:39:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:19.226 19:39:17 -- common/autotest_common.sh@10 -- # set +x 00:30:19.226 ************************************ 00:30:19.226 START TEST spdkcli_nvmf_tcp 00:30:19.226 ************************************ 00:30:19.226 19:39:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.226 * Looking for test storage... 00:30:19.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:19.226 19:39:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:19.226 19:39:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:19.226 19:39:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:19.226 19:39:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:19.226 19:39:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:19.226 19:39:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:19.226 19:39:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:19.226 19:39:17 -- scripts/common.sh@335 -- # IFS=.-: 00:30:19.226 19:39:17 -- scripts/common.sh@335 -- # read -ra ver1 00:30:19.226 19:39:17 -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.226 19:39:17 -- scripts/common.sh@336 -- # read -ra ver2 00:30:19.226 19:39:17 -- scripts/common.sh@337 -- # local 'op=<' 00:30:19.226 19:39:17 -- scripts/common.sh@339 -- # ver1_l=2 00:30:19.226 19:39:17 -- scripts/common.sh@340 -- # ver2_l=1 00:30:19.226 19:39:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:19.226 19:39:17 -- scripts/common.sh@343 -- # case "$op" in 00:30:19.226 19:39:17 -- scripts/common.sh@344 -- # : 1 00:30:19.226 19:39:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:19.226 19:39:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.226 19:39:17 -- scripts/common.sh@364 -- # decimal 1 00:30:19.226 19:39:17 -- scripts/common.sh@352 -- # local d=1 00:30:19.226 19:39:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.226 19:39:17 -- scripts/common.sh@354 -- # echo 1 00:30:19.226 19:39:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:19.226 19:39:17 -- scripts/common.sh@365 -- # decimal 2 00:30:19.226 19:39:17 -- scripts/common.sh@352 -- # local d=2 00:30:19.226 19:39:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.226 19:39:17 -- scripts/common.sh@354 -- # echo 2 00:30:19.226 19:39:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:19.226 19:39:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:19.226 19:39:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:19.226 19:39:17 -- scripts/common.sh@367 -- # return 0 00:30:19.226 19:39:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.226 19:39:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:19.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.226 --rc genhtml_branch_coverage=1 00:30:19.226 --rc genhtml_function_coverage=1 00:30:19.226 --rc genhtml_legend=1 00:30:19.226 --rc geninfo_all_blocks=1 00:30:19.226 --rc geninfo_unexecuted_blocks=1 00:30:19.226 00:30:19.226 ' 00:30:19.226 19:39:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:19.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.226 --rc genhtml_branch_coverage=1 00:30:19.226 --rc genhtml_function_coverage=1 00:30:19.226 --rc genhtml_legend=1 00:30:19.226 --rc geninfo_all_blocks=1 00:30:19.226 --rc geninfo_unexecuted_blocks=1 00:30:19.226 00:30:19.226 ' 00:30:19.226 19:39:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:19.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.226 --rc genhtml_branch_coverage=1 00:30:19.226 --rc genhtml_function_coverage=1 00:30:19.226 --rc genhtml_legend=1 00:30:19.226 --rc geninfo_all_blocks=1 00:30:19.226 --rc geninfo_unexecuted_blocks=1 00:30:19.226 00:30:19.226 ' 00:30:19.226 19:39:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:19.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.226 --rc genhtml_branch_coverage=1 00:30:19.226 --rc genhtml_function_coverage=1 00:30:19.226 --rc genhtml_legend=1 00:30:19.226 --rc geninfo_all_blocks=1 00:30:19.226 --rc geninfo_unexecuted_blocks=1 00:30:19.226 00:30:19.226 ' 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:19.226 19:39:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:19.226 19:39:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.226 19:39:17 -- nvmf/common.sh@7 -- # uname -s 00:30:19.226 19:39:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.226 19:39:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.226 19:39:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.226 19:39:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.226 19:39:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.226 19:39:17 -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:19.226 19:39:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.226 19:39:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.226 19:39:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.226 19:39:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.226 19:39:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.226 19:39:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.226 19:39:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.226 19:39:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.226 19:39:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.226 19:39:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.226 19:39:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.226 19:39:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.226 19:39:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.226 19:39:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.226 19:39:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.226 19:39:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.226 19:39:17 -- paths/export.sh@5 -- # export PATH 00:30:19.226 19:39:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.226 19:39:17 -- nvmf/common.sh@46 -- # : 0 00:30:19.226 19:39:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:19.226 19:39:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:19.226 19:39:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:19.226 19:39:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.226 19:39:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.226 19:39:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:19.226 19:39:17 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:30:19.226 19:39:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:19.226 19:39:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:19.226 19:39:17 -- common/autotest_common.sh@10 -- # set +x 00:30:19.226 19:39:17 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:19.226 19:39:17 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1333419 00:30:19.226 19:39:17 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:19.226 19:39:17 -- spdkcli/common.sh@34 -- # waitforlisten 1333419 00:30:19.226 19:39:17 -- common/autotest_common.sh@829 -- # '[' -z 1333419 ']' 00:30:19.227 19:39:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.227 19:39:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.227 19:39:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.227 19:39:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.227 19:39:17 -- common/autotest_common.sh@10 -- # set +x 00:30:19.487 [2024-11-17 19:39:17.506692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:19.487 [2024-11-17 19:39:17.506777] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333419 ] 00:30:19.487 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.487 [2024-11-17 19:39:17.570267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.487 [2024-11-17 19:39:17.660319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:19.487 [2024-11-17 19:39:17.660554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.487 [2024-11-17 19:39:17.660560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.424 19:39:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:20.424 19:39:18 -- common/autotest_common.sh@862 -- # return 0 00:30:20.424 19:39:18 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:20.424 19:39:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:20.424 19:39:18 -- common/autotest_common.sh@10 -- # set +x 00:30:20.424 19:39:18 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:20.424 19:39:18 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:20.424 19:39:18 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:20.424 19:39:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:20.424 19:39:18 -- common/autotest_common.sh@10 -- # set +x 00:30:20.424 19:39:18 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:20.424 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:20.424 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:20.424 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:30:20.424 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:20.424 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:20.424 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:20.424 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.424 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.424 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:20.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:20.424 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:20.424 ' 00:30:20.684 [2024-11-17 19:39:18.943570] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:23.225 [2024-11-17 19:39:21.138239] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.166 [2024-11-17 19:39:22.386657] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:26.701 [2024-11-17 19:39:24.698058] tcp.c: 
953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:28.606 [2024-11-17 19:39:26.700499] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:29.988 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:29.988 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:29.988 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:29.988 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:29.988 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:29.988 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:29.988 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:29.988 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.988 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.988 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.988 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:29.988 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:29.988 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:30.246 19:39:28 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:30.247 19:39:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:30.247 19:39:28 -- common/autotest_common.sh@10 -- # set +x 00:30:30.247 19:39:28 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:30.247 19:39:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.247 19:39:28 -- common/autotest_common.sh@10 -- # set +x 00:30:30.247 19:39:28 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:30.247 19:39:28 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:30.812 19:39:28 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:30.812 19:39:28 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:30.812 19:39:28 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:30.812 19:39:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:30.812 19:39:28 -- common/autotest_common.sh@10 -- # set +x 00:30:30.812 19:39:28 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:30.812 19:39:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.812 19:39:28 -- common/autotest_common.sh@10 -- # set +x 00:30:30.812 19:39:28 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:30.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:30.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:30.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:30.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:30.812 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:30.812 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:30.812 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:30.812 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:30.812 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:30.812 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:30.812 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:30.812 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:30.812 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:30.812 ' 00:30:36.087 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:36.087 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:36.087 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete 
nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:36.087 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:36.087 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:36.087 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:36.087 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:36.087 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:36.087 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:36.087 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:36.087 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:36.087 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:36.087 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:36.087 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:36.087 19:39:34 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:36.087 19:39:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:36.087 19:39:34 -- common/autotest_common.sh@10 -- # set +x 00:30:36.087 19:39:34 -- spdkcli/nvmf.sh@90 -- # killprocess 1333419 00:30:36.087 19:39:34 -- common/autotest_common.sh@936 -- # '[' -z 1333419 ']' 00:30:36.087 19:39:34 -- common/autotest_common.sh@940 -- # kill -0 1333419 00:30:36.087 19:39:34 -- common/autotest_common.sh@941 -- # uname 00:30:36.087 19:39:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:36.087 19:39:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1333419 00:30:36.087 19:39:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:36.087 19:39:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:36.087 19:39:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1333419' 00:30:36.087 killing process with pid 1333419 00:30:36.087 19:39:34 -- common/autotest_common.sh@955 -- # kill 1333419 00:30:36.087 [2024-11-17 19:39:34.127285] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:36.087 19:39:34 -- common/autotest_common.sh@960 -- # wait 1333419 00:30:36.087 19:39:34 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:36.087 19:39:34 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:36.087 19:39:34 -- spdkcli/common.sh@13 -- # '[' -n 1333419 ']' 00:30:36.087 19:39:34 -- spdkcli/common.sh@14 -- # killprocess 1333419 00:30:36.087 19:39:34 -- common/autotest_common.sh@936 -- # '[' -z 1333419 ']' 00:30:36.087 19:39:34 -- common/autotest_common.sh@940 -- # kill -0 1333419 00:30:36.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1333419) - No such process 00:30:36.087 19:39:34 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1333419 is not found' 00:30:36.087 Process with pid 1333419 is not found 00:30:36.087 19:39:34 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:36.087 19:39:34 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:36.087 19:39:34 -- spdkcli/common.sh@22 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:36.087 00:30:36.087 real 0m17.028s 00:30:36.087 user 0m36.371s 00:30:36.087 sys 0m0.803s 00:30:36.087 19:39:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:36.087 19:39:34 -- common/autotest_common.sh@10 -- # set +x 00:30:36.087 ************************************ 00:30:36.087 END TEST spdkcli_nvmf_tcp 00:30:36.087 ************************************ 00:30:36.348 19:39:34 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:36.348 19:39:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:36.348 19:39:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:36.348 19:39:34 -- common/autotest_common.sh@10 -- # set +x 00:30:36.348 ************************************ 00:30:36.348 START TEST nvmf_identify_passthru 00:30:36.348 ************************************ 00:30:36.348 19:39:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:36.348 * Looking for test storage... 00:30:36.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.348 19:39:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:36.348 19:39:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:36.348 19:39:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:36.348 19:39:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:36.348 19:39:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:36.348 19:39:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:36.348 19:39:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:36.348 19:39:34 -- scripts/common.sh@335 -- # IFS=.-: 00:30:36.348 19:39:34 -- scripts/common.sh@335 -- # read -ra ver1 00:30:36.348 19:39:34 -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.348 19:39:34 -- scripts/common.sh@336 -- # read -ra ver2 00:30:36.348 19:39:34 -- scripts/common.sh@337 -- # local 'op=<' 00:30:36.348 19:39:34 -- scripts/common.sh@339 -- # ver1_l=2 00:30:36.348 19:39:34 -- scripts/common.sh@340 -- # ver2_l=1 00:30:36.348 19:39:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:36.348 19:39:34 -- scripts/common.sh@343 -- # case "$op" in 00:30:36.348 19:39:34 -- scripts/common.sh@344 -- # : 1 00:30:36.348 19:39:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:36.348 19:39:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.348 19:39:34 -- scripts/common.sh@364 -- # decimal 1 00:30:36.348 19:39:34 -- scripts/common.sh@352 -- # local d=1 00:30:36.348 19:39:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.348 19:39:34 -- scripts/common.sh@354 -- # echo 1 00:30:36.348 19:39:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:36.348 19:39:34 -- scripts/common.sh@365 -- # decimal 2 00:30:36.348 19:39:34 -- scripts/common.sh@352 -- # local d=2 00:30:36.348 19:39:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.348 19:39:34 -- scripts/common.sh@354 -- # echo 2 00:30:36.348 19:39:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:36.348 19:39:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:36.349 19:39:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:36.349 19:39:34 -- scripts/common.sh@367 -- # return 0 00:30:36.349 19:39:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.349 19:39:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:36.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.349 --rc genhtml_branch_coverage=1 00:30:36.349 --rc genhtml_function_coverage=1 00:30:36.349 --rc genhtml_legend=1 00:30:36.349 --rc geninfo_all_blocks=1 00:30:36.349 --rc geninfo_unexecuted_blocks=1 00:30:36.349 00:30:36.349 ' 00:30:36.349 19:39:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:36.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.349 --rc genhtml_branch_coverage=1 00:30:36.349 --rc genhtml_function_coverage=1 00:30:36.349 --rc genhtml_legend=1 00:30:36.349 --rc geninfo_all_blocks=1 00:30:36.349 --rc geninfo_unexecuted_blocks=1 00:30:36.349 00:30:36.349 ' 00:30:36.349 19:39:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:36.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.349 --rc genhtml_branch_coverage=1 00:30:36.349 --rc genhtml_function_coverage=1 00:30:36.349 --rc genhtml_legend=1 00:30:36.349 --rc geninfo_all_blocks=1 00:30:36.349 --rc geninfo_unexecuted_blocks=1 00:30:36.349 00:30:36.349 ' 00:30:36.349 19:39:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:36.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.349 --rc genhtml_branch_coverage=1 00:30:36.349 --rc genhtml_function_coverage=1 00:30:36.349 --rc genhtml_legend=1 00:30:36.349 --rc geninfo_all_blocks=1 00:30:36.349 --rc geninfo_unexecuted_blocks=1 00:30:36.349 00:30:36.349 ' 00:30:36.349 19:39:34 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.349 19:39:34 -- nvmf/common.sh@7 -- # uname -s 00:30:36.349 19:39:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.349 19:39:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.349 19:39:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.349 19:39:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.349 19:39:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.349 19:39:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.349 19:39:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.349 19:39:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.349 19:39:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.349 19:39:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.349 19:39:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.349 19:39:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.349 19:39:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.349 19:39:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.349 19:39:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.349 19:39:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.349 19:39:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.349 19:39:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.349 19:39:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.349 19:39:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- paths/export.sh@5 -- # export PATH 00:30:36.349 19:39:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- nvmf/common.sh@46 -- # : 0 00:30:36.349 19:39:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:36.349 19:39:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:36.349 19:39:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:36.349 19:39:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.349 19:39:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.349 19:39:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:36.349 19:39:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:36.349 19:39:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:36.349 19:39:34 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.349 19:39:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.349 19:39:34 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.349 19:39:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.349 19:39:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- paths/export.sh@5 -- # export PATH 00:30:36.349 19:39:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.349 19:39:34 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:36.349 19:39:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:36.349 19:39:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.349 19:39:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:36.349 19:39:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:36.349 19:39:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:36.349 19:39:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.349 19:39:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:36.349 19:39:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.349 19:39:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:36.349 19:39:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:36.349 19:39:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:36.349 19:39:34 -- common/autotest_common.sh@10 -- # set +x 00:30:38.256 19:39:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:38.256 19:39:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:38.256 19:39:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:38.256 19:39:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:38.256 19:39:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 
00:30:38.256 19:39:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:38.256 19:39:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:38.256 19:39:36 -- nvmf/common.sh@294 -- # net_devs=() 00:30:38.256 19:39:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:38.256 19:39:36 -- nvmf/common.sh@295 -- # e810=() 00:30:38.256 19:39:36 -- nvmf/common.sh@295 -- # local -ga e810 00:30:38.256 19:39:36 -- nvmf/common.sh@296 -- # x722=() 00:30:38.256 19:39:36 -- nvmf/common.sh@296 -- # local -ga x722 00:30:38.256 19:39:36 -- nvmf/common.sh@297 -- # mlx=() 00:30:38.256 19:39:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:38.256 19:39:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.256 19:39:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:38.256 19:39:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:38.256 19:39:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:38.256 19:39:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:38.256 19:39:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:38.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:38.256 19:39:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:38.256 19:39:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:38.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:38.256 19:39:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:38.256 19:39:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:38.256 19:39:36 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.256 19:39:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:38.256 19:39:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.256 19:39:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:38.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:38.256 19:39:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.256 19:39:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:38.256 19:39:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.256 19:39:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:38.256 19:39:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.256 19:39:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:38.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:38.256 19:39:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.256 19:39:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:38.256 19:39:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:38.256 19:39:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:38.256 19:39:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:38.256 19:39:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.256 19:39:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.256 19:39:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.256 19:39:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:38.256 19:39:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.256 19:39:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.256 19:39:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:38.256 19:39:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.256 19:39:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.256 19:39:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:38.256 19:39:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:38.256 19:39:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.256 19:39:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.515 19:39:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.515 19:39:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.515 19:39:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:38.515 19:39:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.515 19:39:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.515 19:39:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.515 19:39:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:38.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:30:38.515 00:30:38.515 --- 10.0.0.2 ping statistics --- 00:30:38.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.515 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:30:38.515 19:39:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:30:38.515 00:30:38.515 --- 10.0.0.1 ping statistics --- 00:30:38.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.515 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:30:38.515 19:39:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.515 19:39:36 -- nvmf/common.sh@410 -- # return 0 00:30:38.515 19:39:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:38.515 19:39:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.515 19:39:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:38.515 19:39:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:38.515 19:39:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.515 19:39:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:38.515 19:39:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:38.516 19:39:36 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:38.516 19:39:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:38.516 19:39:36 -- common/autotest_common.sh@10 -- # set +x 00:30:38.516 19:39:36 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:38.516 19:39:36 -- common/autotest_common.sh@1519 -- # bdfs=() 00:30:38.516 19:39:36 -- common/autotest_common.sh@1519 -- # local bdfs 00:30:38.516 19:39:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:30:38.516 19:39:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:30:38.516 19:39:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:30:38.516 19:39:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:30:38.516 19:39:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:38.516 19:39:36 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:38.516 19:39:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:30:38.776 19:39:36 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:30:38.776 19:39:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:88:00.0 00:30:38.776 19:39:36 -- common/autotest_common.sh@1522 -- # echo 0000:88:00.0 00:30:38.776 19:39:36 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:38.776 19:39:36 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:38.776 19:39:36 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:38.776 19:39:36 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:38.776 19:39:36 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:38.776 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.969 19:39:41 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:42.969 19:39:41 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:42.969 19:39:41 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:42.969 19:39:41 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:42.969 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.158 19:39:45 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:47.158 19:39:45 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:47.158 19:39:45 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:47.158 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.158 19:39:45 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:47.158 19:39:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:47.158 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.158 19:39:45 -- target/identify_passthru.sh@31 -- # nvmfpid=1338160 00:30:47.158 19:39:45 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:47.158 19:39:45 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:47.158 19:39:45 -- target/identify_passthru.sh@35 -- # waitforlisten 1338160 00:30:47.158 19:39:45 -- common/autotest_common.sh@829 -- # '[' -z 1338160 ']' 00:30:47.158 19:39:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.158 19:39:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.158 19:39:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.158 19:39:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.158 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.158 [2024-11-17 19:39:45.328320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:47.158 [2024-11-17 19:39:45.328388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.158 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.158 [2024-11-17 19:39:45.391878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.418 [2024-11-17 19:39:45.483426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:47.418 [2024-11-17 19:39:45.483554] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.418 [2024-11-17 19:39:45.483572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.418 [2024-11-17 19:39:45.483585] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:47.418 [2024-11-17 19:39:45.483638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.418 [2024-11-17 19:39:45.483666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.418 [2024-11-17 19:39:45.483704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.418 [2024-11-17 19:39:45.483708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.418 19:39:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.418 19:39:45 -- common/autotest_common.sh@862 -- # return 0 00:30:47.418 19:39:45 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:47.418 19:39:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.418 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.418 INFO: Log level set to 20 00:30:47.418 INFO: Requests: 00:30:47.418 { 00:30:47.418 "jsonrpc": "2.0", 00:30:47.418 "method": "nvmf_set_config", 00:30:47.418 "id": 1, 00:30:47.418 "params": { 00:30:47.418 "admin_cmd_passthru": { 00:30:47.418 "identify_ctrlr": true 00:30:47.418 } 00:30:47.418 } 00:30:47.418 } 00:30:47.418 00:30:47.418 INFO: response: 00:30:47.418 { 00:30:47.418 "jsonrpc": "2.0", 00:30:47.418 "id": 1, 00:30:47.418 "result": true 00:30:47.418 } 00:30:47.418 00:30:47.418 19:39:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.418 19:39:45 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:47.418 19:39:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.418 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.418 INFO: Setting log level to 20 00:30:47.418 INFO: Setting log level to 20 00:30:47.418 INFO: Log level set to 20 00:30:47.418 INFO: Log level set to 20 00:30:47.418 INFO: Requests: 00:30:47.418 { 00:30:47.418 "jsonrpc": "2.0", 00:30:47.418 "method": "framework_start_init", 00:30:47.418 "id": 1 00:30:47.418 } 00:30:47.418 00:30:47.418 INFO: Requests: 00:30:47.418 { 00:30:47.418 "jsonrpc": "2.0", 00:30:47.418 "method": "framework_start_init", 00:30:47.418 "id": 1 00:30:47.418 } 00:30:47.418 00:30:47.418 [2024-11-17 19:39:45.681030] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:47.678 INFO: response: 00:30:47.678 { 00:30:47.678 "jsonrpc": "2.0", 00:30:47.678 "id": 1, 00:30:47.678 "result": true 00:30:47.678 } 00:30:47.678 00:30:47.678 INFO: response: 00:30:47.678 { 00:30:47.678 "jsonrpc": "2.0", 00:30:47.678 "id": 1, 00:30:47.678 "result": true 00:30:47.678 } 00:30:47.678 00:30:47.678 19:39:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.678 19:39:45 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.678 19:39:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.678 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.678 INFO: Setting log level to 40 00:30:47.678 INFO: Setting log level to 40 00:30:47.678 INFO: Setting log level to 40 00:30:47.678 [2024-11-17 19:39:45.691165] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.678 19:39:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.678 19:39:45 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:47.678 19:39:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:47.678 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:47.678 19:39:45 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 
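Because nvmf_tgt was launched with --wait-for-rpc, the target sits idle until the test drives it over the JSON-RPC socket, and nvmf_set_config --passthru-identify-ctrlr has to arrive before framework_start_init for the "Custom identify ctrlr handler enabled" notice above to appear. The rpc_cmd calls traced here (and continued below for the subsystem) are, in effect, calls into SPDK's scripts/rpc.py, so the same configuration can be reproduced by hand along these lines; the path is relative to the SPDK tree and the names and flags are the ones from this run:

rpc=scripts/rpc.py
$rpc nvmf_set_config --passthru-identify-ctrlr                       # must land before framework init
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, 8 KiB IO unit
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0    # local NVMe SSD -> bdev Nvme0n1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420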
00:30:47.678 19:39:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.678 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:30:50.966 Nvme0n1 00:30:50.966 19:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.966 19:39:48 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:50.966 19:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.966 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:30:50.966 19:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.966 19:39:48 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:50.966 19:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.966 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:30:50.966 19:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.966 19:39:48 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.966 19:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.966 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:30:50.966 [2024-11-17 19:39:48.581887] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.966 19:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.966 19:39:48 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:50.966 19:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.966 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:30:50.966 [2024-11-17 19:39:48.589595] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:50.966 [ 00:30:50.966 { 00:30:50.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:50.966 "subtype": "Discovery", 00:30:50.966 "listen_addresses": [], 00:30:50.966 "allow_any_host": true, 00:30:50.966 "hosts": [] 00:30:50.966 }, 00:30:50.966 { 00:30:50.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.966 "subtype": "NVMe", 00:30:50.966 "listen_addresses": [ 00:30:50.966 { 00:30:50.966 "transport": "TCP", 00:30:50.966 "trtype": "TCP", 00:30:50.966 "adrfam": "IPv4", 00:30:50.966 "traddr": "10.0.0.2", 00:30:50.966 "trsvcid": "4420" 00:30:50.966 } 00:30:50.966 ], 00:30:50.966 "allow_any_host": true, 00:30:50.966 "hosts": [], 00:30:50.966 "serial_number": "SPDK00000000000001", 00:30:50.966 "model_number": "SPDK bdev Controller", 00:30:50.966 "max_namespaces": 1, 00:30:50.966 "min_cntlid": 1, 00:30:50.966 "max_cntlid": 65519, 00:30:50.966 "namespaces": [ 00:30:50.966 { 00:30:50.966 "nsid": 1, 00:30:50.966 "bdev_name": "Nvme0n1", 00:30:50.966 "name": "Nvme0n1", 00:30:50.966 "nguid": "ED4C7D611A64481E98D010BE64897806", 00:30:50.966 "uuid": "ed4c7d61-1a64-481e-98d0-10be64897806" 00:30:50.966 } 00:30:50.966 ] 00:30:50.966 } 00:30:50.966 ] 00:30:50.966 19:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.966 19:39:48 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:50.966 19:39:48 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:50.966 19:39:48 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:50.966 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:50.966 19:39:48 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:50.966 19:39:48 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:50.966 19:39:48 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:50.966 19:39:48 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:50.966 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.966 19:39:48 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:50.966 19:39:48 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:50.966 19:39:48 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:50.966 19:39:48 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.966 19:39:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.966 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:30:50.966 19:39:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.966 19:39:48 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:50.966 19:39:48 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:50.966 19:39:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:50.966 19:39:48 -- nvmf/common.sh@116 -- # sync 00:30:50.966 19:39:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:50.966 19:39:48 -- nvmf/common.sh@119 -- # set +e 00:30:50.966 19:39:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:50.966 19:39:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:50.966 rmmod nvme_tcp 00:30:50.966 rmmod nvme_fabrics 00:30:50.966 rmmod nvme_keyring 00:30:50.966 19:39:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:50.966 19:39:48 -- nvmf/common.sh@123 -- # set -e 00:30:50.966 19:39:48 -- nvmf/common.sh@124 -- # return 0 00:30:50.966 19:39:48 -- nvmf/common.sh@477 -- # '[' -n 1338160 ']' 00:30:50.966 19:39:48 -- nvmf/common.sh@478 -- # killprocess 1338160 00:30:50.966 19:39:48 -- common/autotest_common.sh@936 -- # '[' -z 1338160 ']' 00:30:50.966 19:39:48 -- common/autotest_common.sh@940 -- # kill -0 1338160 00:30:50.966 19:39:48 -- common/autotest_common.sh@941 -- # uname 00:30:50.967 19:39:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:50.967 19:39:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1338160 00:30:50.967 19:39:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:50.967 19:39:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:50.967 19:39:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1338160' 00:30:50.967 killing process with pid 1338160 00:30:50.967 19:39:48 -- common/autotest_common.sh@955 -- # kill 1338160 00:30:50.967 [2024-11-17 19:39:48.958524] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:50.967 19:39:48 -- common/autotest_common.sh@960 -- # wait 1338160 00:30:52.343 19:39:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:52.343 19:39:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:52.343 19:39:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:52.343 19:39:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.343 19:39:50 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:30:52.343 19:39:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.343 19:39:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:52.343 19:39:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.879 19:39:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:54.879 00:30:54.879 real 0m18.190s 00:30:54.879 user 0m26.691s 00:30:54.879 sys 0m2.324s 00:30:54.879 19:39:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:54.879 19:39:52 -- common/autotest_common.sh@10 -- # set +x 00:30:54.879 ************************************ 00:30:54.879 END TEST nvmf_identify_passthru 00:30:54.879 ************************************ 00:30:54.879 19:39:52 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:54.879 19:39:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:54.879 19:39:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:54.879 19:39:52 -- common/autotest_common.sh@10 -- # set +x 00:30:54.879 ************************************ 00:30:54.879 START TEST nvmf_dif 00:30:54.879 ************************************ 00:30:54.879 19:39:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:54.879 * Looking for test storage... 00:30:54.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.879 19:39:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:30:54.879 19:39:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:30:54.879 19:39:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:30:54.879 19:39:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:30:54.879 19:39:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:30:54.879 19:39:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:54.879 19:39:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:54.879 19:39:52 -- scripts/common.sh@335 -- # IFS=.-: 00:30:54.879 19:39:52 -- scripts/common.sh@335 -- # read -ra ver1 00:30:54.879 19:39:52 -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.879 19:39:52 -- scripts/common.sh@336 -- # read -ra ver2 00:30:54.879 19:39:52 -- scripts/common.sh@337 -- # local 'op=<' 00:30:54.879 19:39:52 -- scripts/common.sh@339 -- # ver1_l=2 00:30:54.879 19:39:52 -- scripts/common.sh@340 -- # ver2_l=1 00:30:54.879 19:39:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:54.879 19:39:52 -- scripts/common.sh@343 -- # case "$op" in 00:30:54.879 19:39:52 -- scripts/common.sh@344 -- # : 1 00:30:54.879 19:39:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:54.879 19:39:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.879 19:39:52 -- scripts/common.sh@364 -- # decimal 1 00:30:54.879 19:39:52 -- scripts/common.sh@352 -- # local d=1 00:30:54.879 19:39:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.879 19:39:52 -- scripts/common.sh@354 -- # echo 1 00:30:54.879 19:39:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:30:54.879 19:39:52 -- scripts/common.sh@365 -- # decimal 2 00:30:54.879 19:39:52 -- scripts/common.sh@352 -- # local d=2 00:30:54.879 19:39:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.879 19:39:52 -- scripts/common.sh@354 -- # echo 2 00:30:54.879 19:39:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:30:54.879 19:39:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:54.879 19:39:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:54.879 19:39:52 -- scripts/common.sh@367 -- # return 0 00:30:54.879 19:39:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.879 19:39:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:30:54.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.879 --rc genhtml_branch_coverage=1 00:30:54.879 --rc genhtml_function_coverage=1 00:30:54.879 --rc genhtml_legend=1 00:30:54.879 --rc geninfo_all_blocks=1 00:30:54.879 --rc geninfo_unexecuted_blocks=1 00:30:54.879 00:30:54.879 ' 00:30:54.879 19:39:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:30:54.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.879 --rc genhtml_branch_coverage=1 00:30:54.879 --rc genhtml_function_coverage=1 00:30:54.879 --rc genhtml_legend=1 00:30:54.879 --rc geninfo_all_blocks=1 00:30:54.879 --rc geninfo_unexecuted_blocks=1 00:30:54.879 00:30:54.879 ' 00:30:54.879 19:39:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:30:54.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.879 --rc genhtml_branch_coverage=1 00:30:54.879 --rc genhtml_function_coverage=1 00:30:54.879 --rc genhtml_legend=1 00:30:54.879 --rc geninfo_all_blocks=1 00:30:54.879 --rc geninfo_unexecuted_blocks=1 00:30:54.879 00:30:54.879 ' 00:30:54.879 19:39:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:30:54.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.879 --rc genhtml_branch_coverage=1 00:30:54.879 --rc genhtml_function_coverage=1 00:30:54.879 --rc genhtml_legend=1 00:30:54.879 --rc geninfo_all_blocks=1 00:30:54.879 --rc geninfo_unexecuted_blocks=1 00:30:54.879 00:30:54.879 ' 00:30:54.879 19:39:52 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.879 19:39:52 -- nvmf/common.sh@7 -- # uname -s 00:30:54.879 19:39:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.879 19:39:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.879 19:39:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.879 19:39:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.879 19:39:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.879 19:39:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.879 19:39:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.879 19:39:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.879 19:39:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.879 19:39:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.879 19:39:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.879 19:39:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:54.879 19:39:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.879 19:39:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.879 19:39:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.879 19:39:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.879 19:39:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.879 19:39:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.879 19:39:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.879 19:39:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.879 19:39:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.879 19:39:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.879 19:39:52 -- paths/export.sh@5 -- # export PATH 00:30:54.879 19:39:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.879 19:39:52 -- nvmf/common.sh@46 -- # : 0 00:30:54.879 19:39:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:54.879 19:39:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:54.879 19:39:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:54.879 19:39:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.879 19:39:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.879 19:39:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:54.879 19:39:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:54.879 19:39:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:54.879 19:39:52 -- target/dif.sh@15 -- # NULL_META=16 00:30:54.879 19:39:52 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:54.879 19:39:52 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:54.879 19:39:52 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:54.879 19:39:52 -- 
target/dif.sh@135 -- # nvmftestinit 00:30:54.879 19:39:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:54.879 19:39:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.879 19:39:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:54.879 19:39:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:54.879 19:39:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:54.880 19:39:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.880 19:39:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:54.880 19:39:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.880 19:39:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:54.880 19:39:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:54.880 19:39:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:54.880 19:39:52 -- common/autotest_common.sh@10 -- # set +x 00:30:56.789 19:39:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:56.789 19:39:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:56.789 19:39:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:56.789 19:39:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:56.789 19:39:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:56.789 19:39:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:56.789 19:39:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:56.789 19:39:54 -- nvmf/common.sh@294 -- # net_devs=() 00:30:56.789 19:39:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:56.789 19:39:54 -- nvmf/common.sh@295 -- # e810=() 00:30:56.790 19:39:54 -- nvmf/common.sh@295 -- # local -ga e810 00:30:56.790 19:39:54 -- nvmf/common.sh@296 -- # x722=() 00:30:56.790 19:39:54 -- nvmf/common.sh@296 -- # local -ga x722 00:30:56.790 19:39:54 -- nvmf/common.sh@297 -- # mlx=() 00:30:56.790 19:39:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:56.790 19:39:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.790 19:39:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:56.790 19:39:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:56.790 19:39:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:56.790 19:39:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:56.790 19:39:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:56.790 Found 0000:0a:00.0 (0x8086 - 0x159b) 
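gather_supported_nvmf_pci_devs, traced here for the second time, is just a PCI-ID filter plus a sysfs lookup: the e810/x722/mlx arrays collect the device IDs the harness supports, the bus cache matches 0x8086:0x159b (E810), and the kernel interface name is then read out of the device's net/ directory. The core of that lookup is the fragment below, lifted from the trace with the loop unrolled for one port:

pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # glob: one entry per netdev bound to this function
pci_net_devs=("${pci_net_devs[@]##*/}")                   # drop the sysfs path, keep the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 in this run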
00:30:56.790 19:39:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:56.790 19:39:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:56.790 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:56.790 19:39:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:56.790 19:39:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:56.790 19:39:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.790 19:39:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:56.790 19:39:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.790 19:39:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:56.790 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:56.790 19:39:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.790 19:39:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:56.790 19:39:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.790 19:39:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:56.790 19:39:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.790 19:39:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:56.790 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:56.790 19:39:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.790 19:39:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:56.790 19:39:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:56.790 19:39:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:56.790 19:39:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:56.790 19:39:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.790 19:39:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.790 19:39:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.790 19:39:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:56.790 19:39:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.790 19:39:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.790 19:39:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:56.790 19:39:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.790 19:39:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.790 19:39:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:56.790 19:39:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:56.790 19:39:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 
00:30:56.790 19:39:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.790 19:39:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.790 19:39:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.790 19:39:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:56.790 19:39:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.790 19:39:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.790 19:39:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.790 19:39:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:56.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:30:56.790 00:30:56.790 --- 10.0.0.2 ping statistics --- 00:30:56.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.790 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:30:56.790 19:39:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:56.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:30:56.790 00:30:56.790 --- 10.0.0.1 ping statistics --- 00:30:56.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.790 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:30:56.790 19:39:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.790 19:39:54 -- nvmf/common.sh@410 -- # return 0 00:30:56.790 19:39:54 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:56.790 19:39:54 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:57.725 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:57.725 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:57.725 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:57.725 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:57.725 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:57.725 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:57.725 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:57.725 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:57.725 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:57.725 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:57.725 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:57.725 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:57.725 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:57.725 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:57.725 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:57.725 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:57.725 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:57.725 19:39:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.725 19:39:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:57.725 19:39:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:57.725 19:39:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.725 19:39:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:57.725 19:39:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:57.984 19:39:56 -- target/dif.sh@136 -- # 
NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:57.984 19:39:56 -- target/dif.sh@137 -- # nvmfappstart 00:30:57.984 19:39:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:57.984 19:39:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:57.984 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:30:57.984 19:39:56 -- nvmf/common.sh@469 -- # nvmfpid=1341491 00:30:57.984 19:39:56 -- nvmf/common.sh@470 -- # waitforlisten 1341491 00:30:57.984 19:39:56 -- common/autotest_common.sh@829 -- # '[' -z 1341491 ']' 00:30:57.984 19:39:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.984 19:39:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:57.984 19:39:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:57.984 19:39:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.984 19:39:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:57.984 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:30:57.984 [2024-11-17 19:39:56.055899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:30:57.984 [2024-11-17 19:39:56.055971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.984 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.984 [2024-11-17 19:39:56.123732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.984 [2024-11-17 19:39:56.211332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:57.984 [2024-11-17 19:39:56.211502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.984 [2024-11-17 19:39:56.211522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.984 [2024-11-17 19:39:56.211536] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
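For the dif tests the target is started again inside the namespace, but this time the TCP transport is created with --dif-insert-or-strip, so the target inserts and strips the protection information itself rather than expecting it from the host, and the backing device is a null bdev formatted with 512-byte blocks plus 16 bytes of metadata and DIF type 1. Condensed from the surrounding trace, and with the workspace path shortened, the target-side setup is roughly:

rpc=scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &       # target lives in the namespace
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip              # DIF handled at the transport
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1       # 64 MiB, 512B blocks + 16B metadata
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420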
00:30:57.984 [2024-11-17 19:39:56.211568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.919 19:39:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:58.919 19:39:57 -- common/autotest_common.sh@862 -- # return 0 00:30:58.919 19:39:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:58.919 19:39:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 19:39:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.919 19:39:57 -- target/dif.sh@139 -- # create_transport 00:30:58.919 19:39:57 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:58.919 19:39:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 [2024-11-17 19:39:57.032029] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.919 19:39:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.919 19:39:57 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:58.919 19:39:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:58.919 19:39:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 ************************************ 00:30:58.919 START TEST fio_dif_1_default 00:30:58.919 ************************************ 00:30:58.919 19:39:57 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:30:58.919 19:39:57 -- target/dif.sh@86 -- # create_subsystems 0 00:30:58.919 19:39:57 -- target/dif.sh@28 -- # local sub 00:30:58.919 19:39:57 -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.919 19:39:57 -- target/dif.sh@31 -- # create_subsystem 0 00:30:58.919 19:39:57 -- target/dif.sh@18 -- # local sub_id=0 00:30:58.919 19:39:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:58.919 19:39:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 bdev_null0 00:30:58.919 19:39:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.919 19:39:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:58.919 19:39:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 19:39:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.919 19:39:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:58.919 19:39:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 19:39:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.919 19:39:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.919 19:39:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.919 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 [2024-11-17 19:39:57.068290] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.919 19:39:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.919 19:39:57 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:58.919 19:39:57 -- 
target/dif.sh@87 -- # create_json_sub_conf 0 00:30:58.919 19:39:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:58.919 19:39:57 -- nvmf/common.sh@520 -- # config=() 00:30:58.919 19:39:57 -- nvmf/common.sh@520 -- # local subsystem config 00:30:58.919 19:39:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:58.919 19:39:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.919 19:39:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:58.919 { 00:30:58.919 "params": { 00:30:58.919 "name": "Nvme$subsystem", 00:30:58.919 "trtype": "$TEST_TRANSPORT", 00:30:58.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.919 "adrfam": "ipv4", 00:30:58.919 "trsvcid": "$NVMF_PORT", 00:30:58.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.919 "hdgst": ${hdgst:-false}, 00:30:58.919 "ddgst": ${ddgst:-false} 00:30:58.919 }, 00:30:58.919 "method": "bdev_nvme_attach_controller" 00:30:58.919 } 00:30:58.919 EOF 00:30:58.919 )") 00:30:58.919 19:39:57 -- target/dif.sh@82 -- # gen_fio_conf 00:30:58.919 19:39:57 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.919 19:39:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:30:58.919 19:39:57 -- target/dif.sh@54 -- # local file 00:30:58.919 19:39:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.919 19:39:57 -- target/dif.sh@56 -- # cat 00:30:58.919 19:39:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:30:58.919 19:39:57 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.919 19:39:57 -- common/autotest_common.sh@1330 -- # shift 00:30:58.919 19:39:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:30:58.919 19:39:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.919 19:39:57 -- nvmf/common.sh@542 -- # cat 00:30:58.919 19:39:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:58.919 19:39:57 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.919 19:39:57 -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.919 19:39:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:30:58.919 19:39:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:58.920 19:39:57 -- nvmf/common.sh@544 -- # jq . 
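fio never opens a /dev/nvme device in these jobs: it is run through SPDK's fio bdev plugin (the LD_PRELOAD of build/fio/spdk_bdev below), so the initiator side is a user-space NVMe/TCP connection described by the JSON that gen_nvmf_target_json emits a few lines further down. Done by hand with a plain config file instead of the /dev/fd redirections, the run boils down to roughly the sketch below; the outer subsystems/config wrapper is assumed from SPDK's standard JSON-config layout, the job options mirror the filename0 job shown in the fio banner, and the --filename is the bdev name the plugin exposes for the attached controller (Nvme0n1 here).

cat > bdev.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
EOF
# --thread is a requirement of the SPDK fio plugin; paths are relative to the SPDK tree
LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --thread \
    --spdk_json_conf bdev.json --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=4096 --iodepth=4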
00:30:58.920 19:39:57 -- nvmf/common.sh@545 -- # IFS=, 00:30:58.920 19:39:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:58.920 "params": { 00:30:58.920 "name": "Nvme0", 00:30:58.920 "trtype": "tcp", 00:30:58.920 "traddr": "10.0.0.2", 00:30:58.920 "adrfam": "ipv4", 00:30:58.920 "trsvcid": "4420", 00:30:58.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.920 "hdgst": false, 00:30:58.920 "ddgst": false 00:30:58.920 }, 00:30:58.920 "method": "bdev_nvme_attach_controller" 00:30:58.920 }' 00:30:58.920 19:39:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:58.920 19:39:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:58.920 19:39:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.920 19:39:57 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.920 19:39:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:30:58.920 19:39:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:30:58.920 19:39:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:30:58.920 19:39:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:30:58.920 19:39:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:58.920 19:39:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:59.178 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:59.178 fio-3.35 00:30:59.178 Starting 1 thread 00:30:59.178 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.745 [2024-11-17 19:39:57.734578] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:30:59.745 [2024-11-17 19:39:57.734644] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:09.806 00:31:09.806 filename0: (groupid=0, jobs=1): err= 0: pid=1341735: Sun Nov 17 19:40:07 2024 00:31:09.806 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10013msec) 00:31:09.806 slat (nsec): min=4011, max=25793, avg=9122.78, stdev=2332.77 00:31:09.806 clat (usec): min=603, max=48758, avg=40840.02, stdev=2623.86 00:31:09.806 lat (usec): min=611, max=48770, avg=40849.14, stdev=2623.83 00:31:09.806 clat percentiles (usec): 00:31:09.806 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:09.806 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:09.806 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:09.806 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:31:09.806 | 99.99th=[48497] 00:31:09.806 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=390.40, stdev=13.13, samples=20 00:31:09.806 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:31:09.806 lat (usec) : 750=0.41% 00:31:09.806 lat (msec) : 50=99.59% 00:31:09.806 cpu : usr=91.56%, sys=8.16%, ctx=14, majf=0, minf=256 00:31:09.806 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.806 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.806 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:09.806 00:31:09.806 Run status group 0 (all jobs): 00:31:09.806 READ: bw=391KiB/s (401kB/s), 391KiB/s-391KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10013-10013msec 00:31:09.806 19:40:08 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:09.806 19:40:08 -- target/dif.sh@43 -- # local sub 00:31:09.806 19:40:08 -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.806 19:40:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:09.806 19:40:08 -- target/dif.sh@36 -- # local sub_id=0 00:31:09.806 19:40:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.806 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.806 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:09.806 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.806 19:40:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:09.806 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.806 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.067 00:31:10.067 real 0m11.034s 00:31:10.067 user 0m10.100s 00:31:10.067 sys 0m1.087s 00:31:10.067 19:40:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:10.067 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 ************************************ 00:31:10.067 END TEST fio_dif_1_default 00:31:10.067 ************************************ 00:31:10.067 19:40:08 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:10.067 19:40:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:10.067 19:40:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:10.067 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 ************************************ 00:31:10.067 START TEST fio_dif_1_multi_subsystems 
00:31:10.067 ************************************ 00:31:10.067 19:40:08 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:31:10.067 19:40:08 -- target/dif.sh@92 -- # local files=1 00:31:10.067 19:40:08 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:10.067 19:40:08 -- target/dif.sh@28 -- # local sub 00:31:10.067 19:40:08 -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.067 19:40:08 -- target/dif.sh@31 -- # create_subsystem 0 00:31:10.067 19:40:08 -- target/dif.sh@18 -- # local sub_id=0 00:31:10.067 19:40:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:10.067 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.067 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 bdev_null0 00:31:10.067 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.067 19:40:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.067 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.067 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.067 19:40:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.067 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.067 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.067 19:40:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.067 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.067 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.067 [2024-11-17 19:40:08.134092] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.067 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.067 19:40:08 -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.067 19:40:08 -- target/dif.sh@31 -- # create_subsystem 1 00:31:10.067 19:40:08 -- target/dif.sh@18 -- # local sub_id=1 00:31:10.067 19:40:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:10.068 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.068 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.068 bdev_null1 00:31:10.068 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.068 19:40:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:10.068 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.068 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.068 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.068 19:40:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:10.068 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.068 19:40:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.068 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.068 19:40:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.068 19:40:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.068 19:40:08 -- common/autotest_common.sh@10 -- # 
set +x 00:31:10.068 19:40:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.068 19:40:08 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:10.068 19:40:08 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:10.068 19:40:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:10.068 19:40:08 -- nvmf/common.sh@520 -- # config=() 00:31:10.068 19:40:08 -- nvmf/common.sh@520 -- # local subsystem config 00:31:10.068 19:40:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:10.068 19:40:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:10.068 { 00:31:10.068 "params": { 00:31:10.068 "name": "Nvme$subsystem", 00:31:10.068 "trtype": "$TEST_TRANSPORT", 00:31:10.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.068 "adrfam": "ipv4", 00:31:10.068 "trsvcid": "$NVMF_PORT", 00:31:10.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.068 "hdgst": ${hdgst:-false}, 00:31:10.068 "ddgst": ${ddgst:-false} 00:31:10.068 }, 00:31:10.068 "method": "bdev_nvme_attach_controller" 00:31:10.068 } 00:31:10.068 EOF 00:31:10.068 )") 00:31:10.068 19:40:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.068 19:40:08 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.068 19:40:08 -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.068 19:40:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:10.068 19:40:08 -- target/dif.sh@54 -- # local file 00:31:10.068 19:40:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.068 19:40:08 -- target/dif.sh@56 -- # cat 00:31:10.068 19:40:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:10.068 19:40:08 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.068 19:40:08 -- common/autotest_common.sh@1330 -- # shift 00:31:10.068 19:40:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:10.068 19:40:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.068 19:40:08 -- nvmf/common.sh@542 -- # cat 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.068 19:40:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:10.068 19:40:08 -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.068 19:40:08 -- target/dif.sh@73 -- # cat 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:10.068 19:40:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:10.068 19:40:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:10.068 { 00:31:10.068 "params": { 00:31:10.068 "name": "Nvme$subsystem", 00:31:10.068 "trtype": "$TEST_TRANSPORT", 00:31:10.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.068 "adrfam": "ipv4", 00:31:10.068 "trsvcid": "$NVMF_PORT", 00:31:10.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.068 "hdgst": ${hdgst:-false}, 00:31:10.068 "ddgst": ${ddgst:-false} 00:31:10.068 }, 00:31:10.068 "method": "bdev_nvme_attach_controller" 00:31:10.068 } 00:31:10.068 EOF 00:31:10.068 )") 00:31:10.068 19:40:08 -- nvmf/common.sh@542 -- # cat 00:31:10.068 
19:40:08 -- target/dif.sh@72 -- # (( file++ )) 00:31:10.068 19:40:08 -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.068 19:40:08 -- nvmf/common.sh@544 -- # jq . 00:31:10.068 19:40:08 -- nvmf/common.sh@545 -- # IFS=, 00:31:10.068 19:40:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:10.068 "params": { 00:31:10.068 "name": "Nvme0", 00:31:10.068 "trtype": "tcp", 00:31:10.068 "traddr": "10.0.0.2", 00:31:10.068 "adrfam": "ipv4", 00:31:10.068 "trsvcid": "4420", 00:31:10.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.068 "hdgst": false, 00:31:10.068 "ddgst": false 00:31:10.068 }, 00:31:10.068 "method": "bdev_nvme_attach_controller" 00:31:10.068 },{ 00:31:10.068 "params": { 00:31:10.068 "name": "Nvme1", 00:31:10.068 "trtype": "tcp", 00:31:10.068 "traddr": "10.0.0.2", 00:31:10.068 "adrfam": "ipv4", 00:31:10.068 "trsvcid": "4420", 00:31:10.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.068 "hdgst": false, 00:31:10.068 "ddgst": false 00:31:10.068 }, 00:31:10.068 "method": "bdev_nvme_attach_controller" 00:31:10.068 }' 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:10.068 19:40:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:10.068 19:40:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:10.068 19:40:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:10.068 19:40:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:10.068 19:40:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.068 19:40:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.327 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:10.327 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:10.327 fio-3.35 00:31:10.327 Starting 2 threads 00:31:10.327 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.261 [2024-11-17 19:40:09.169251] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
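For reference, the long trace above boils down to a single fio command: the SPDK bdev ioengine is injected with LD_PRELOAD, and the generated bdev JSON config and fio job file are handed to fio as file descriptors (/dev/fd/62 and /dev/fd/61). A minimal stand-alone sketch of that invocation, using this run's workspace path and hypothetical on-disk stand-ins (bdev.json, jobfile.fio) for the streamed configs:

#!/usr/bin/env bash
# Sketch only: bdev.json stands in for the bdev_nvme_attach_controller entries
# printed by gen_nvmf_target_json above; jobfile.fio stands in for the job file
# produced by gen_fio_conf.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FIO_BIN=/usr/src/fio/fio

# Preload the spdk_bdev fio plugin and point fio at the bdev JSON config.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" "$FIO_BIN" \
    --ioengine=spdk_bdev \
    --spdk_json_conf bdev.json \
    jobfile.fio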
00:31:11.261 [2024-11-17 19:40:09.169313] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:21.230 00:31:21.230 filename0: (groupid=0, jobs=1): err= 0: pid=1343181: Sun Nov 17 19:40:19 2024 00:31:21.230 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10025msec) 00:31:21.230 slat (nsec): min=3348, max=85246, avg=10304.98, stdev=5261.35 00:31:21.230 clat (usec): min=688, max=46385, avg=40882.58, stdev=2606.81 00:31:21.230 lat (usec): min=696, max=46407, avg=40892.89, stdev=2606.80 00:31:21.230 clat percentiles (usec): 00:31:21.230 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:21.230 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:21.230 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:21.230 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:31:21.230 | 99.99th=[46400] 00:31:21.230 bw ( KiB/s): min= 384, max= 448, per=31.98%, avg=390.40, stdev=16.74, samples=20 00:31:21.230 iops : min= 96, max= 112, avg=97.60, stdev= 4.19, samples=20 00:31:21.230 lat (usec) : 750=0.41% 00:31:21.230 lat (msec) : 50=99.59% 00:31:21.230 cpu : usr=97.40%, sys=2.32%, ctx=15, majf=0, minf=185 00:31:21.230 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.230 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.231 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:21.231 filename1: (groupid=0, jobs=1): err= 0: pid=1343182: Sun Nov 17 19:40:19 2024 00:31:21.231 read: IOPS=207, BW=830KiB/s (850kB/s)(8304KiB/10002msec) 00:31:21.231 slat (usec): min=4, max=100, avg=10.54, stdev= 4.29 00:31:21.231 clat (usec): min=513, max=42399, avg=19238.26, stdev=20274.49 00:31:21.231 lat (usec): min=521, max=42412, avg=19248.80, stdev=20274.04 00:31:21.231 clat percentiles (usec): 00:31:21.231 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 611], 00:31:21.231 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 725], 60.00th=[41157], 00:31:21.231 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:21.231 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:21.231 | 99.99th=[42206] 00:31:21.231 bw ( KiB/s): min= 768, max= 1024, per=68.48%, avg=835.37, stdev=72.26, samples=19 00:31:21.231 iops : min= 192, max= 256, avg=208.84, stdev=18.07, samples=19 00:31:21.231 lat (usec) : 750=52.26%, 1000=1.88% 00:31:21.231 lat (msec) : 10=0.19%, 50=45.66% 00:31:21.231 cpu : usr=97.04%, sys=2.66%, ctx=23, majf=0, minf=235 00:31:21.231 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.231 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.231 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:21.231 00:31:21.231 Run status group 0 (all jobs): 00:31:21.231 READ: bw=1219KiB/s (1249kB/s), 391KiB/s-830KiB/s (400kB/s-850kB/s), io=11.9MiB (12.5MB), run=10002-10025msec 00:31:21.489 19:40:19 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:21.489 19:40:19 -- target/dif.sh@43 -- # local sub 00:31:21.489 19:40:19 -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.489 19:40:19 -- target/dif.sh@46 -- 
# destroy_subsystem 0 00:31:21.489 19:40:19 -- target/dif.sh@36 -- # local sub_id=0 00:31:21.489 19:40:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.489 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.489 19:40:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:21.489 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.489 19:40:19 -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.489 19:40:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:21.489 19:40:19 -- target/dif.sh@36 -- # local sub_id=1 00:31:21.489 19:40:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.489 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.489 19:40:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:21.489 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.489 00:31:21.489 real 0m11.438s 00:31:21.489 user 0m20.758s 00:31:21.489 sys 0m0.840s 00:31:21.489 19:40:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 ************************************ 00:31:21.489 END TEST fio_dif_1_multi_subsystems 00:31:21.489 ************************************ 00:31:21.489 19:40:19 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:21.489 19:40:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:21.489 19:40:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 ************************************ 00:31:21.489 START TEST fio_dif_rand_params 00:31:21.489 ************************************ 00:31:21.489 19:40:19 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:31:21.489 19:40:19 -- target/dif.sh@100 -- # local NULL_DIF 00:31:21.489 19:40:19 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:21.489 19:40:19 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:21.489 19:40:19 -- target/dif.sh@103 -- # bs=128k 00:31:21.489 19:40:19 -- target/dif.sh@103 -- # numjobs=3 00:31:21.489 19:40:19 -- target/dif.sh@103 -- # iodepth=3 00:31:21.489 19:40:19 -- target/dif.sh@103 -- # runtime=5 00:31:21.489 19:40:19 -- target/dif.sh@105 -- # create_subsystems 0 00:31:21.489 19:40:19 -- target/dif.sh@28 -- # local sub 00:31:21.489 19:40:19 -- target/dif.sh@30 -- # for sub in "$@" 00:31:21.489 19:40:19 -- target/dif.sh@31 -- # create_subsystem 0 00:31:21.489 19:40:19 -- target/dif.sh@18 -- # local sub_id=0 00:31:21.489 19:40:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:21.489 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 bdev_null0 00:31:21.489 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.489 
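The create_subsystem helper traced just above and below issues four RPCs against the running nvmf target. As a sketch, here is the same sequence sent directly through SPDK's scripts/rpc.py (rpc_cmd in autotest_common.sh is essentially a wrapper around it), with the arguments exactly as traced and the default RPC socket named in the listen errors above:

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

# Null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3
# (the NULL_DIF=3 case of fio_dif_rand_params).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# NVMe-oF subsystem open to any host, with the null bdev as its namespace,
# listening on the TCP transport at 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The destroy_subsystems path traced just before this mirrors the sequence in reverse, via nvmf_delete_subsystem and bdev_null_delete.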
19:40:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:21.489 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.489 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.489 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.490 19:40:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:21.490 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.490 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.490 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.490 19:40:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.490 19:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.490 19:40:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.490 [2024-11-17 19:40:19.598213] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.490 19:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.490 19:40:19 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:21.490 19:40:19 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:21.490 19:40:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:21.490 19:40:19 -- nvmf/common.sh@520 -- # config=() 00:31:21.490 19:40:19 -- nvmf/common.sh@520 -- # local subsystem config 00:31:21.490 19:40:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:21.490 19:40:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:21.490 { 00:31:21.490 "params": { 00:31:21.490 "name": "Nvme$subsystem", 00:31:21.490 "trtype": "$TEST_TRANSPORT", 00:31:21.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.490 "adrfam": "ipv4", 00:31:21.490 "trsvcid": "$NVMF_PORT", 00:31:21.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.490 "hdgst": ${hdgst:-false}, 00:31:21.490 "ddgst": ${ddgst:-false} 00:31:21.490 }, 00:31:21.490 "method": "bdev_nvme_attach_controller" 00:31:21.490 } 00:31:21.490 EOF 00:31:21.490 )") 00:31:21.490 19:40:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.490 19:40:19 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.490 19:40:19 -- target/dif.sh@82 -- # gen_fio_conf 00:31:21.490 19:40:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:21.490 19:40:19 -- target/dif.sh@54 -- # local file 00:31:21.490 19:40:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.490 19:40:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:21.490 19:40:19 -- target/dif.sh@56 -- # cat 00:31:21.490 19:40:19 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.490 19:40:19 -- common/autotest_common.sh@1330 -- # shift 00:31:21.490 19:40:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:21.490 19:40:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.490 19:40:19 -- nvmf/common.sh@542 -- # cat 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.490 
19:40:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:21.490 19:40:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:21.490 19:40:19 -- target/dif.sh@72 -- # (( file <= files )) 00:31:21.490 19:40:19 -- nvmf/common.sh@544 -- # jq . 00:31:21.490 19:40:19 -- nvmf/common.sh@545 -- # IFS=, 00:31:21.490 19:40:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:21.490 "params": { 00:31:21.490 "name": "Nvme0", 00:31:21.490 "trtype": "tcp", 00:31:21.490 "traddr": "10.0.0.2", 00:31:21.490 "adrfam": "ipv4", 00:31:21.490 "trsvcid": "4420", 00:31:21.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.490 "hdgst": false, 00:31:21.490 "ddgst": false 00:31:21.490 }, 00:31:21.490 "method": "bdev_nvme_attach_controller" 00:31:21.490 }' 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:21.490 19:40:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:21.490 19:40:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:21.490 19:40:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:21.490 19:40:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:21.490 19:40:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:21.490 19:40:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.750 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:21.750 ... 00:31:21.750 fio-3.35 00:31:21.750 Starting 3 threads 00:31:21.750 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.316 [2024-11-17 19:40:20.401476] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
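The 3-thread randread job starting above comes from gen_fio_conf with the parameters set at the top of this test (bs=128k, numjobs=3, iodepth=3, runtime=5) and is read over /dev/fd/61. A rough, illustrative equivalent written as a bash heredoc, the same way the test assembles its configs; the bdev name Nvme0n1 is an assumption (the namespace bdev that appears once the Nvme0 controller from the JSON config attaches), and time_based is inferred from the ~5 s run lengths reported below:

# Sketch only: approximates the generated job, not the literal output of
# gen_fio_conf.
cat > jobfile.fio <<EOF
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF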
00:31:22.316 [2024-11-17 19:40:20.401556] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:27.581 00:31:27.581 filename0: (groupid=0, jobs=1): err= 0: pid=1344617: Sun Nov 17 19:40:25 2024 00:31:27.581 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(151MiB/5046msec) 00:31:27.581 slat (nsec): min=4275, max=34461, avg=12977.10, stdev=2972.75 00:31:27.581 clat (usec): min=6349, max=51086, avg=12521.17, stdev=2401.32 00:31:27.581 lat (usec): min=6362, max=51099, avg=12534.15, stdev=2401.32 00:31:27.581 clat percentiles (usec): 00:31:27.581 | 1.00th=[ 7963], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10945], 00:31:27.581 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[13042], 00:31:27.581 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14615], 95.00th=[15139], 00:31:27.581 | 99.00th=[16909], 99.50th=[17695], 99.90th=[47449], 99.95th=[51119], 00:31:27.581 | 99.99th=[51119] 00:31:27.581 bw ( KiB/s): min=29696, max=31488, per=34.95%, avg=30751.80, stdev=626.33, samples=10 00:31:27.581 iops : min= 232, max= 246, avg=240.20, stdev= 4.85, samples=10 00:31:27.581 lat (msec) : 10=7.81%, 20=91.78%, 50=0.33%, 100=0.08% 00:31:27.581 cpu : usr=92.27%, sys=7.23%, ctx=9, majf=0, minf=78 00:31:27.581 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.581 issued rwts: total=1204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.581 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.581 filename0: (groupid=0, jobs=1): err= 0: pid=1344618: Sun Nov 17 19:40:25 2024 00:31:27.581 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(138MiB/5045msec) 00:31:27.581 slat (nsec): min=4492, max=36191, avg=13449.06, stdev=2958.88 00:31:27.581 clat (usec): min=7510, max=95252, avg=13704.11, stdev=5470.68 00:31:27.581 lat (usec): min=7523, max=95265, avg=13717.56, stdev=5470.65 00:31:27.581 clat percentiles (usec): 00:31:27.581 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10945], 20.00th=[11600], 00:31:27.581 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13566], 00:31:27.581 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15401], 95.00th=[16188], 00:31:27.581 | 99.00th=[51119], 99.50th=[53740], 99.90th=[55313], 99.95th=[94897], 00:31:27.581 | 99.99th=[94897] 00:31:27.581 bw ( KiB/s): min=20480, max=31232, per=31.91%, avg=28083.20, stdev=3163.21, samples=10 00:31:27.581 iops : min= 160, max= 244, avg=219.40, stdev=24.71, samples=10 00:31:27.581 lat (msec) : 10=3.18%, 20=95.36%, 50=0.09%, 100=1.36% 00:31:27.581 cpu : usr=92.62%, sys=6.88%, ctx=10, majf=0, minf=102 00:31:27.581 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.581 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.581 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.581 filename0: (groupid=0, jobs=1): err= 0: pid=1344619: Sun Nov 17 19:40:25 2024 00:31:27.581 read: IOPS=230, BW=28.9MiB/s (30.3MB/s)(146MiB/5044msec) 00:31:27.582 slat (nsec): min=4877, max=60566, avg=13191.46, stdev=3162.46 00:31:27.582 clat (usec): min=6233, max=55919, avg=12934.04, stdev=3891.83 00:31:27.582 lat (usec): min=6246, max=55932, avg=12947.23, stdev=3891.86 00:31:27.582 clat percentiles 
(usec): 00:31:27.582 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10814], 00:31:27.582 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12911], 60.00th=[13435], 00:31:27.582 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15270], 95.00th=[16057], 00:31:27.582 | 99.00th=[17695], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:31:27.582 | 99.99th=[55837] 00:31:27.582 bw ( KiB/s): min=27392, max=32512, per=33.83%, avg=29772.80, stdev=2169.05, samples=10 00:31:27.582 iops : min= 214, max= 254, avg=232.60, stdev=16.95, samples=10 00:31:27.582 lat (msec) : 10=9.96%, 20=89.36%, 50=0.09%, 100=0.60% 00:31:27.582 cpu : usr=92.15%, sys=7.34%, ctx=11, majf=0, minf=138 00:31:27.582 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.582 issued rwts: total=1165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:27.582 00:31:27.582 Run status group 0 (all jobs): 00:31:27.582 READ: bw=85.9MiB/s (90.1MB/s), 27.3MiB/s-29.8MiB/s (28.6MB/s-31.3MB/s), io=434MiB (455MB), run=5044-5046msec 00:31:27.582 19:40:25 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:27.582 19:40:25 -- target/dif.sh@43 -- # local sub 00:31:27.582 19:40:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.582 19:40:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:27.582 19:40:25 -- target/dif.sh@36 -- # local sub_id=0 00:31:27.582 19:40:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.582 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.582 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:27.841 19:40:25 -- target/dif.sh@109 -- # bs=4k 00:31:27.841 19:40:25 -- target/dif.sh@109 -- # numjobs=8 00:31:27.841 19:40:25 -- target/dif.sh@109 -- # iodepth=16 00:31:27.841 19:40:25 -- target/dif.sh@109 -- # runtime= 00:31:27.841 19:40:25 -- target/dif.sh@109 -- # files=2 00:31:27.841 19:40:25 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:27.841 19:40:25 -- target/dif.sh@28 -- # local sub 00:31:27.841 19:40:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.841 19:40:25 -- target/dif.sh@31 -- # create_subsystem 0 00:31:27.841 19:40:25 -- target/dif.sh@18 -- # local sub_id=0 00:31:27.841 19:40:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 bdev_null0 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 [2024-11-17 19:40:25.892246] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.841 19:40:25 -- target/dif.sh@31 -- # create_subsystem 1 00:31:27.841 19:40:25 -- target/dif.sh@18 -- # local sub_id=1 00:31:27.841 19:40:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 bdev_null1 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.841 19:40:25 -- target/dif.sh@31 -- # create_subsystem 2 00:31:27.841 19:40:25 -- target/dif.sh@18 -- # local sub_id=2 00:31:27.841 19:40:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 bdev_null2 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- 
common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:27.841 19:40:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.841 19:40:25 -- common/autotest_common.sh@10 -- # set +x 00:31:27.841 19:40:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.841 19:40:25 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:27.841 19:40:25 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:27.841 19:40:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:27.841 19:40:25 -- nvmf/common.sh@520 -- # config=() 00:31:27.841 19:40:25 -- nvmf/common.sh@520 -- # local subsystem config 00:31:27.841 19:40:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:27.841 19:40:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.841 19:40:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:27.841 { 00:31:27.841 "params": { 00:31:27.841 "name": "Nvme$subsystem", 00:31:27.841 "trtype": "$TEST_TRANSPORT", 00:31:27.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.841 "adrfam": "ipv4", 00:31:27.841 "trsvcid": "$NVMF_PORT", 00:31:27.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.841 "hdgst": ${hdgst:-false}, 00:31:27.841 "ddgst": ${ddgst:-false} 00:31:27.841 }, 00:31:27.841 "method": "bdev_nvme_attach_controller" 00:31:27.841 } 00:31:27.841 EOF 00:31:27.841 )") 00:31:27.841 19:40:25 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.841 19:40:25 -- target/dif.sh@82 -- # gen_fio_conf 00:31:27.841 19:40:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:27.841 19:40:25 -- target/dif.sh@54 -- # local file 00:31:27.841 19:40:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.841 19:40:25 -- target/dif.sh@56 -- # cat 00:31:27.841 19:40:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:27.841 19:40:25 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.841 19:40:25 -- common/autotest_common.sh@1330 -- # shift 00:31:27.841 19:40:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:27.841 19:40:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.841 19:40:25 -- nvmf/common.sh@542 -- # cat 00:31:27.841 19:40:25 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.841 19:40:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.841 19:40:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:27.841 19:40:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.841 19:40:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:27.841 19:40:25 -- target/dif.sh@73 -- # cat 00:31:27.841 19:40:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:27.841 19:40:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:27.841 { 00:31:27.841 "params": { 00:31:27.841 "name": "Nvme$subsystem", 00:31:27.841 "trtype": "$TEST_TRANSPORT", 00:31:27.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.841 "adrfam": "ipv4", 00:31:27.841 "trsvcid": 
"$NVMF_PORT", 00:31:27.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.841 "hdgst": ${hdgst:-false}, 00:31:27.841 "ddgst": ${ddgst:-false} 00:31:27.841 }, 00:31:27.841 "method": "bdev_nvme_attach_controller" 00:31:27.841 } 00:31:27.841 EOF 00:31:27.841 )") 00:31:27.841 19:40:25 -- nvmf/common.sh@542 -- # cat 00:31:27.841 19:40:25 -- target/dif.sh@72 -- # (( file++ )) 00:31:27.841 19:40:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.841 19:40:25 -- target/dif.sh@73 -- # cat 00:31:27.841 19:40:25 -- target/dif.sh@72 -- # (( file++ )) 00:31:27.841 19:40:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.841 19:40:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:27.841 19:40:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:27.841 { 00:31:27.841 "params": { 00:31:27.841 "name": "Nvme$subsystem", 00:31:27.841 "trtype": "$TEST_TRANSPORT", 00:31:27.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.841 "adrfam": "ipv4", 00:31:27.841 "trsvcid": "$NVMF_PORT", 00:31:27.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.842 "hdgst": ${hdgst:-false}, 00:31:27.842 "ddgst": ${ddgst:-false} 00:31:27.842 }, 00:31:27.842 "method": "bdev_nvme_attach_controller" 00:31:27.842 } 00:31:27.842 EOF 00:31:27.842 )") 00:31:27.842 19:40:25 -- nvmf/common.sh@542 -- # cat 00:31:27.842 19:40:25 -- nvmf/common.sh@544 -- # jq . 00:31:27.842 19:40:25 -- nvmf/common.sh@545 -- # IFS=, 00:31:27.842 19:40:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:27.842 "params": { 00:31:27.842 "name": "Nvme0", 00:31:27.842 "trtype": "tcp", 00:31:27.842 "traddr": "10.0.0.2", 00:31:27.842 "adrfam": "ipv4", 00:31:27.842 "trsvcid": "4420", 00:31:27.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.842 "hdgst": false, 00:31:27.842 "ddgst": false 00:31:27.842 }, 00:31:27.842 "method": "bdev_nvme_attach_controller" 00:31:27.842 },{ 00:31:27.842 "params": { 00:31:27.842 "name": "Nvme1", 00:31:27.842 "trtype": "tcp", 00:31:27.842 "traddr": "10.0.0.2", 00:31:27.842 "adrfam": "ipv4", 00:31:27.842 "trsvcid": "4420", 00:31:27.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.842 "hdgst": false, 00:31:27.842 "ddgst": false 00:31:27.842 }, 00:31:27.842 "method": "bdev_nvme_attach_controller" 00:31:27.842 },{ 00:31:27.842 "params": { 00:31:27.842 "name": "Nvme2", 00:31:27.842 "trtype": "tcp", 00:31:27.842 "traddr": "10.0.0.2", 00:31:27.842 "adrfam": "ipv4", 00:31:27.842 "trsvcid": "4420", 00:31:27.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:27.842 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:27.842 "hdgst": false, 00:31:27.842 "ddgst": false 00:31:27.842 }, 00:31:27.842 "method": "bdev_nvme_attach_controller" 00:31:27.842 }' 00:31:27.842 19:40:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:27.842 19:40:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:27.842 19:40:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.842 19:40:25 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.842 19:40:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:31:27.842 19:40:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:27.842 19:40:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:27.842 19:40:26 -- 
common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:27.842 19:40:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:27.842 19:40:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.100 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:28.100 ... 00:31:28.100 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:28.100 ... 00:31:28.100 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:28.100 ... 00:31:28.100 fio-3.35 00:31:28.100 Starting 24 threads 00:31:28.100 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.042 [2024-11-17 19:40:26.988059] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:29.042 [2024-11-17 19:40:26.988117] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:41.246 00:31:41.246 filename0: (groupid=0, jobs=1): err= 0: pid=1345504: Sun Nov 17 19:40:37 2024 00:31:41.246 read: IOPS=40, BW=164KiB/s (167kB/s)(1664KiB/10175msec) 00:31:41.246 slat (usec): min=8, max=147, avg=19.64, stdev=13.87 00:31:41.246 clat (msec): min=154, max=762, avg=389.89, stdev=125.53 00:31:41.246 lat (msec): min=154, max=762, avg=389.91, stdev=125.53 00:31:41.246 clat percentiles (msec): 00:31:41.246 | 1.00th=[ 203], 5.00th=[ 209], 10.00th=[ 245], 20.00th=[ 275], 00:31:41.246 | 30.00th=[ 296], 40.00th=[ 342], 50.00th=[ 388], 60.00th=[ 397], 00:31:41.246 | 70.00th=[ 439], 80.00th=[ 542], 90.00th=[ 575], 95.00th=[ 584], 00:31:41.246 | 99.00th=[ 735], 99.50th=[ 735], 99.90th=[ 760], 99.95th=[ 760], 00:31:41.246 | 99.99th=[ 760] 00:31:41.246 bw ( KiB/s): min= 112, max= 256, per=3.71%, avg=168.42, stdev=56.28, samples=19 00:31:41.246 iops : min= 28, max= 64, avg=42.11, stdev=14.07, samples=19 00:31:41.246 lat (msec) : 250=15.38%, 500=60.10%, 750=24.04%, 1000=0.48% 00:31:41.246 cpu : usr=98.90%, sys=0.67%, ctx=14, majf=0, minf=40 00:31:41.246 IO depths : 1=2.6%, 2=8.2%, 4=22.8%, 8=56.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.246 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.246 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.246 filename0: (groupid=0, jobs=1): err= 0: pid=1345505: Sun Nov 17 19:40:37 2024 00:31:41.246 read: IOPS=45, BW=182KiB/s (187kB/s)(1856KiB/10179msec) 00:31:41.246 slat (usec): min=5, max=153, avg=36.66, stdev=38.08 00:31:41.246 clat (msec): min=174, max=601, avg=348.94, stdev=62.89 00:31:41.246 lat (msec): min=174, max=601, avg=348.98, stdev=62.89 00:31:41.246 clat percentiles (msec): 00:31:41.246 | 1.00th=[ 205], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 292], 00:31:41.246 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 372], 00:31:41.246 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 405], 95.00th=[ 414], 00:31:41.246 | 99.00th=[ 435], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 600], 00:31:41.246 | 99.99th=[ 600] 00:31:41.246 bw ( KiB/s): min= 112, max= 256, per=3.95%, avg=179.20, stdev=60.00, samples=20 00:31:41.246 iops : min= 28, max= 64, avg=44.80, stdev=15.00, samples=20 00:31:41.246 
lat (msec) : 250=11.21%, 500=87.93%, 750=0.86% 00:31:41.246 cpu : usr=98.91%, sys=0.63%, ctx=30, majf=0, minf=49 00:31:41.246 IO depths : 1=2.2%, 2=8.4%, 4=25.0%, 8=54.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.246 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.246 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.246 filename0: (groupid=0, jobs=1): err= 0: pid=1345506: Sun Nov 17 19:40:37 2024 00:31:41.246 read: IOPS=56, BW=227KiB/s (233kB/s)(2320KiB/10210msec) 00:31:41.246 slat (usec): min=8, max=102, avg=17.39, stdev=16.30 00:31:41.246 clat (msec): min=10, max=598, avg=279.14, stdev=115.95 00:31:41.246 lat (msec): min=10, max=598, avg=279.15, stdev=115.95 00:31:41.246 clat percentiles (msec): 00:31:41.246 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 165], 20.00th=[ 184], 00:31:41.246 | 30.00th=[ 218], 40.00th=[ 234], 50.00th=[ 271], 60.00th=[ 313], 00:31:41.246 | 70.00th=[ 359], 80.00th=[ 393], 90.00th=[ 401], 95.00th=[ 426], 00:31:41.246 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:31:41.246 | 99.99th=[ 600] 00:31:41.246 bw ( KiB/s): min= 96, max= 513, per=4.97%, avg=225.65, stdev=100.39, samples=20 00:31:41.246 iops : min= 24, max= 128, avg=56.40, stdev=25.06, samples=20 00:31:41.246 lat (msec) : 20=5.52%, 250=37.59%, 500=54.83%, 750=2.07% 00:31:41.246 cpu : usr=98.55%, sys=1.03%, ctx=14, majf=0, minf=60 00:31:41.246 IO depths : 1=1.0%, 2=3.4%, 4=13.1%, 8=70.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.246 complete : 0=0.0%, 4=90.6%, 8=4.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.246 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.246 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.246 filename0: (groupid=0, jobs=1): err= 0: pid=1345507: Sun Nov 17 19:40:37 2024 00:31:41.246 read: IOPS=39, BW=157KiB/s (161kB/s)(1600KiB/10178msec) 00:31:41.246 slat (usec): min=8, max=156, avg=50.46, stdev=37.67 00:31:41.246 clat (msec): min=208, max=788, avg=406.63, stdev=136.43 00:31:41.246 lat (msec): min=208, max=788, avg=406.68, stdev=136.41 00:31:41.246 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 209], 5.00th=[ 236], 10.00th=[ 245], 20.00th=[ 275], 00:31:41.247 | 30.00th=[ 296], 40.00th=[ 334], 50.00th=[ 368], 60.00th=[ 414], 00:31:41.247 | 70.00th=[ 542], 80.00th=[ 567], 90.00th=[ 592], 95.00th=[ 600], 00:31:41.247 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 793], 99.95th=[ 793], 00:31:41.247 | 99.99th=[ 793] 00:31:41.247 bw ( KiB/s): min= 128, max= 256, per=3.75%, avg=170.67, stdev=60.37, samples=18 00:31:41.247 iops : min= 32, max= 64, avg=42.67, stdev=15.09, samples=18 00:31:41.247 lat (msec) : 250=12.00%, 500=52.50%, 750=35.00%, 1000=0.50% 00:31:41.247 cpu : usr=98.71%, sys=0.75%, ctx=28, majf=0, minf=41 00:31:41.247 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename0: (groupid=0, jobs=1): err= 0: pid=1345508: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=43, BW=176KiB/s 
(180kB/s)(1792KiB/10190msec) 00:31:41.247 slat (usec): min=8, max=157, avg=70.36, stdev=44.35 00:31:41.247 clat (msec): min=142, max=790, avg=363.35, stdev=122.10 00:31:41.247 lat (msec): min=142, max=791, avg=363.42, stdev=122.11 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 142], 5.00th=[ 209], 10.00th=[ 220], 20.00th=[ 249], 00:31:41.247 | 30.00th=[ 292], 40.00th=[ 330], 50.00th=[ 368], 60.00th=[ 376], 00:31:41.247 | 70.00th=[ 401], 80.00th=[ 435], 90.00th=[ 584], 95.00th=[ 600], 00:31:41.247 | 99.00th=[ 600], 99.50th=[ 617], 99.90th=[ 793], 99.95th=[ 793], 00:31:41.247 | 99.99th=[ 793] 00:31:41.247 bw ( KiB/s): min= 112, max= 256, per=3.80%, avg=172.80, stdev=61.33, samples=20 00:31:41.247 iops : min= 28, max= 64, avg=43.20, stdev=15.33, samples=20 00:31:41.247 lat (msec) : 250=22.77%, 500=58.93%, 750=17.86%, 1000=0.45% 00:31:41.247 cpu : usr=98.64%, sys=0.90%, ctx=24, majf=0, minf=43 00:31:41.247 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename0: (groupid=0, jobs=1): err= 0: pid=1345509: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=49, BW=197KiB/s (202kB/s)(2008KiB/10186msec) 00:31:41.247 slat (usec): min=8, max=180, avg=32.34, stdev=33.61 00:31:41.247 clat (msec): min=141, max=584, avg=322.93, stdev=82.08 00:31:41.247 lat (msec): min=141, max=584, avg=322.97, stdev=82.08 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 142], 5.00th=[ 165], 10.00th=[ 207], 20.00th=[ 255], 00:31:41.247 | 30.00th=[ 288], 40.00th=[ 309], 50.00th=[ 334], 60.00th=[ 359], 00:31:41.247 | 70.00th=[ 384], 80.00th=[ 397], 90.00th=[ 405], 95.00th=[ 414], 00:31:41.247 | 99.00th=[ 498], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:31:41.247 | 99.99th=[ 584] 00:31:41.247 bw ( KiB/s): min= 128, max= 384, per=4.28%, avg=194.40, stdev=64.06, samples=20 00:31:41.247 iops : min= 32, max= 96, avg=48.60, stdev=16.01, samples=20 00:31:41.247 lat (msec) : 250=19.52%, 500=79.68%, 750=0.80% 00:31:41.247 cpu : usr=98.76%, sys=0.77%, ctx=11, majf=0, minf=45 00:31:41.247 IO depths : 1=1.4%, 2=4.4%, 4=15.1%, 8=67.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=91.2%, 8=3.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename0: (groupid=0, jobs=1): err= 0: pid=1345510: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=62, BW=249KiB/s (255kB/s)(2544KiB/10216msec) 00:31:41.247 slat (nsec): min=3434, max=47794, avg=11494.44, stdev=6262.91 00:31:41.247 clat (msec): min=9, max=412, avg=254.56, stdev=115.67 00:31:41.247 lat (msec): min=9, max=412, avg=254.57, stdev=115.67 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 96], 20.00th=[ 174], 00:31:41.247 | 30.00th=[ 186], 40.00th=[ 207], 50.00th=[ 259], 60.00th=[ 309], 00:31:41.247 | 70.00th=[ 355], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 401], 00:31:41.247 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:31:41.247 | 99.99th=[ 414] 00:31:41.247 bw ( KiB/s): min= 128, max= 768, 
per=5.47%, avg=248.00, stdev=144.84, samples=20 00:31:41.247 iops : min= 32, max= 192, avg=62.00, stdev=36.21, samples=20 00:31:41.247 lat (msec) : 10=2.52%, 20=5.03%, 100=3.14%, 250=37.42%, 500=51.89% 00:31:41.247 cpu : usr=98.66%, sys=0.95%, ctx=15, majf=0, minf=45 00:31:41.247 IO depths : 1=0.8%, 2=1.9%, 4=9.3%, 8=76.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=89.6%, 8=4.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename0: (groupid=0, jobs=1): err= 0: pid=1345511: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=59, BW=237KiB/s (243kB/s)(2424KiB/10210msec) 00:31:41.247 slat (nsec): min=7569, max=85297, avg=15438.96, stdev=11107.92 00:31:41.247 clat (msec): min=10, max=416, avg=268.95, stdev=104.20 00:31:41.247 lat (msec): min=10, max=416, avg=268.97, stdev=104.19 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 138], 20.00th=[ 197], 00:31:41.247 | 30.00th=[ 209], 40.00th=[ 228], 50.00th=[ 264], 60.00th=[ 313], 00:31:41.247 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 401], 00:31:41.247 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:31:41.247 | 99.99th=[ 418] 00:31:41.247 bw ( KiB/s): min= 128, max= 512, per=5.21%, avg=236.00, stdev=106.62, samples=20 00:31:41.247 iops : min= 32, max= 128, avg=59.00, stdev=26.66, samples=20 00:31:41.247 lat (msec) : 20=5.28%, 250=41.91%, 500=52.81% 00:31:41.247 cpu : usr=98.76%, sys=0.78%, ctx=17, majf=0, minf=46 00:31:41.247 IO depths : 1=1.8%, 2=8.1%, 4=25.1%, 8=54.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename1: (groupid=0, jobs=1): err= 0: pid=1345512: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=37, BW=152KiB/s (156kB/s)(1536KiB/10114msec) 00:31:41.247 slat (usec): min=8, max=101, avg=20.80, stdev=17.50 00:31:41.247 clat (msec): min=208, max=815, avg=421.24, stdev=139.63 00:31:41.247 lat (msec): min=208, max=815, avg=421.26, stdev=139.62 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 209], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 279], 00:31:41.247 | 30.00th=[ 296], 40.00th=[ 347], 50.00th=[ 401], 60.00th=[ 514], 00:31:41.247 | 70.00th=[ 550], 80.00th=[ 575], 90.00th=[ 584], 95.00th=[ 617], 00:31:41.247 | 99.00th=[ 768], 99.50th=[ 818], 99.90th=[ 818], 99.95th=[ 818], 00:31:41.247 | 99.99th=[ 818] 00:31:41.247 bw ( KiB/s): min= 16, max= 256, per=3.40%, avg=154.95, stdev=67.05, samples=19 00:31:41.247 iops : min= 4, max= 64, avg=38.74, stdev=16.76, samples=19 00:31:41.247 lat (msec) : 250=12.50%, 500=46.88%, 750=39.58%, 1000=1.04% 00:31:41.247 cpu : usr=98.82%, sys=0.74%, ctx=16, majf=0, minf=53 00:31:41.247 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:31:41.247 filename1: (groupid=0, jobs=1): err= 0: pid=1345513: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=58, BW=233KiB/s (238kB/s)(2376KiB/10210msec) 00:31:41.247 slat (nsec): min=3904, max=47489, avg=13093.84, stdev=7118.34 00:31:41.247 clat (msec): min=10, max=577, avg=272.67, stdev=109.02 00:31:41.247 lat (msec): min=10, max=577, avg=272.68, stdev=109.02 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 142], 20.00th=[ 197], 00:31:41.247 | 30.00th=[ 209], 40.00th=[ 228], 50.00th=[ 271], 60.00th=[ 313], 00:31:41.247 | 70.00th=[ 359], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 405], 00:31:41.247 | 99.00th=[ 477], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:31:41.247 | 99.99th=[ 575] 00:31:41.247 bw ( KiB/s): min= 128, max= 512, per=5.10%, avg=231.20, stdev=100.72, samples=20 00:31:41.247 iops : min= 32, max= 128, avg=57.80, stdev=25.18, samples=20 00:31:41.247 lat (msec) : 20=5.39%, 250=39.39%, 500=54.55%, 750=0.67% 00:31:41.247 cpu : usr=98.87%, sys=0.74%, ctx=15, majf=0, minf=50 00:31:41.247 IO depths : 1=1.2%, 2=4.0%, 4=14.6%, 8=68.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename1: (groupid=0, jobs=1): err= 0: pid=1345514: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=39, BW=157KiB/s (161kB/s)(1600KiB/10175msec) 00:31:41.247 slat (usec): min=8, max=151, avg=66.28, stdev=39.77 00:31:41.247 clat (msec): min=155, max=782, avg=406.30, stdev=128.27 00:31:41.247 lat (msec): min=155, max=782, avg=406.36, stdev=128.26 00:31:41.247 clat percentiles (msec): 00:31:41.247 | 1.00th=[ 209], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 279], 00:31:41.247 | 30.00th=[ 296], 40.00th=[ 347], 50.00th=[ 401], 60.00th=[ 414], 00:31:41.247 | 70.00th=[ 514], 80.00th=[ 550], 90.00th=[ 584], 95.00th=[ 592], 00:31:41.247 | 99.00th=[ 600], 99.50th=[ 684], 99.90th=[ 785], 99.95th=[ 785], 00:31:41.247 | 99.99th=[ 785] 00:31:41.247 bw ( KiB/s): min= 112, max= 256, per=3.75%, avg=170.67, stdev=62.33, samples=18 00:31:41.247 iops : min= 28, max= 64, avg=42.67, stdev=15.58, samples=18 00:31:41.247 lat (msec) : 250=12.00%, 500=56.50%, 750=31.00%, 1000=0.50% 00:31:41.247 cpu : usr=98.51%, sys=0.95%, ctx=23, majf=0, minf=47 00:31:41.247 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:41.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.247 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.247 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.247 filename1: (groupid=0, jobs=1): err= 0: pid=1345515: Sun Nov 17 19:40:37 2024 00:31:41.247 read: IOPS=48, BW=195KiB/s (199kB/s)(1984KiB/10190msec) 00:31:41.247 slat (usec): min=8, max=125, avg=33.48, stdev=31.44 00:31:41.247 clat (msec): min=141, max=536, avg=326.65, stdev=68.87 00:31:41.248 lat (msec): min=142, max=536, avg=326.69, stdev=68.86 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 142], 5.00th=[ 209], 10.00th=[ 239], 20.00th=[ 275], 00:31:41.248 | 30.00th=[ 292], 40.00th=[ 305], 50.00th=[ 342], 60.00th=[ 363], 00:31:41.248 | 70.00th=[ 372], 80.00th=[ 393], 90.00th=[ 401], 
95.00th=[ 405], 00:31:41.248 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 542], 99.95th=[ 542], 00:31:41.248 | 99.99th=[ 542] 00:31:41.248 bw ( KiB/s): min= 128, max= 256, per=4.22%, avg=192.00, stdev=59.64, samples=20 00:31:41.248 iops : min= 32, max= 64, avg=48.00, stdev=14.91, samples=20 00:31:41.248 lat (msec) : 250=16.13%, 500=83.47%, 750=0.40% 00:31:41.248 cpu : usr=98.46%, sys=0.92%, ctx=31, majf=0, minf=45 00:31:41.248 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename1: (groupid=0, jobs=1): err= 0: pid=1345516: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=47, BW=189KiB/s (193kB/s)(1920KiB/10182msec) 00:31:41.248 slat (usec): min=8, max=146, avg=35.72, stdev=37.37 00:31:41.248 clat (msec): min=208, max=533, avg=338.98, stdev=65.05 00:31:41.248 lat (msec): min=208, max=533, avg=339.02, stdev=65.05 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 209], 5.00th=[ 226], 10.00th=[ 239], 20.00th=[ 271], 00:31:41.248 | 30.00th=[ 292], 40.00th=[ 321], 50.00th=[ 351], 60.00th=[ 372], 00:31:41.248 | 70.00th=[ 393], 80.00th=[ 401], 90.00th=[ 405], 95.00th=[ 418], 00:31:41.248 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 535], 99.95th=[ 535], 00:31:41.248 | 99.99th=[ 535] 00:31:41.248 bw ( KiB/s): min= 128, max= 256, per=4.08%, avg=185.60, stdev=65.33, samples=20 00:31:41.248 iops : min= 32, max= 64, avg=46.40, stdev=16.33, samples=20 00:31:41.248 lat (msec) : 250=16.67%, 500=82.92%, 750=0.42% 00:31:41.248 cpu : usr=98.75%, sys=0.64%, ctx=69, majf=0, minf=37 00:31:41.248 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename1: (groupid=0, jobs=1): err= 0: pid=1345517: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=47, BW=188KiB/s (193kB/s)(1920KiB/10186msec) 00:31:41.248 slat (usec): min=8, max=144, avg=45.79, stdev=41.01 00:31:41.248 clat (msec): min=141, max=557, avg=337.43, stdev=81.50 00:31:41.248 lat (msec): min=141, max=557, avg=337.48, stdev=81.49 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 142], 5.00th=[ 182], 10.00th=[ 209], 20.00th=[ 275], 00:31:41.248 | 30.00th=[ 292], 40.00th=[ 342], 50.00th=[ 363], 60.00th=[ 384], 00:31:41.248 | 70.00th=[ 393], 80.00th=[ 401], 90.00th=[ 409], 95.00th=[ 439], 00:31:41.248 | 99.00th=[ 493], 99.50th=[ 518], 99.90th=[ 558], 99.95th=[ 558], 00:31:41.248 | 99.99th=[ 558] 00:31:41.248 bw ( KiB/s): min= 128, max= 368, per=4.08%, avg=185.60, stdev=69.53, samples=20 00:31:41.248 iops : min= 32, max= 92, avg=46.40, stdev=17.38, samples=20 00:31:41.248 lat (msec) : 250=17.92%, 500=81.25%, 750=0.83% 00:31:41.248 cpu : usr=97.99%, sys=1.13%, ctx=190, majf=0, minf=29 00:31:41.248 IO depths : 1=1.9%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:41.248 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename1: (groupid=0, jobs=1): err= 0: pid=1345518: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=37, BW=151KiB/s (155kB/s)(1536KiB/10176msec) 00:31:41.248 slat (nsec): min=8183, max=92880, avg=20337.39, stdev=17771.68 00:31:41.248 clat (msec): min=137, max=802, avg=423.71, stdev=139.33 00:31:41.248 lat (msec): min=138, max=802, avg=423.73, stdev=139.32 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 165], 5.00th=[ 239], 10.00th=[ 257], 20.00th=[ 284], 00:31:41.248 | 30.00th=[ 309], 40.00th=[ 384], 50.00th=[ 397], 60.00th=[ 477], 00:31:41.248 | 70.00th=[ 531], 80.00th=[ 575], 90.00th=[ 592], 95.00th=[ 634], 00:31:41.248 | 99.00th=[ 793], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 802], 00:31:41.248 | 99.99th=[ 802] 00:31:41.248 bw ( KiB/s): min= 16, max= 256, per=3.24%, avg=147.20, stdev=59.55, samples=20 00:31:41.248 iops : min= 4, max= 64, avg=36.80, stdev=14.89, samples=20 00:31:41.248 lat (msec) : 250=9.38%, 500=58.33%, 750=30.73%, 1000=1.56% 00:31:41.248 cpu : usr=98.58%, sys=0.88%, ctx=35, majf=0, minf=37 00:31:41.248 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename1: (groupid=0, jobs=1): err= 0: pid=1345519: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=50, BW=203KiB/s (208kB/s)(2072KiB/10213msec) 00:31:41.248 slat (usec): min=3, max=147, avg=49.30, stdev=39.05 00:31:41.248 clat (msec): min=8, max=692, avg=313.85, stdev=133.95 00:31:41.248 lat (msec): min=8, max=692, avg=313.89, stdev=133.96 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 138], 20.00th=[ 220], 00:31:41.248 | 30.00th=[ 284], 40.00th=[ 292], 50.00th=[ 351], 60.00th=[ 376], 00:31:41.248 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 439], 95.00th=[ 481], 00:31:41.248 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 693], 99.95th=[ 693], 00:31:41.248 | 99.99th=[ 693] 00:31:41.248 bw ( KiB/s): min= 128, max= 640, per=4.41%, avg=200.80, stdev=116.59, samples=20 00:31:41.248 iops : min= 32, max= 160, avg=50.20, stdev=29.15, samples=20 00:31:41.248 lat (msec) : 10=3.09%, 20=6.18%, 250=16.60%, 500=69.50%, 750=4.63% 00:31:41.248 cpu : usr=98.64%, sys=0.87%, ctx=17, majf=0, minf=48 00:31:41.248 IO depths : 1=2.1%, 2=6.9%, 4=20.7%, 8=59.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 issued rwts: total=518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename2: (groupid=0, jobs=1): err= 0: pid=1345520: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=42, BW=169KiB/s (173kB/s)(1720KiB/10182msec) 00:31:41.248 slat (nsec): min=8231, max=66486, avg=16822.14, stdev=9329.61 00:31:41.248 clat (msec): min=174, max=613, avg=378.40, stdev=119.86 00:31:41.248 lat (msec): min=174, max=613, avg=378.42, stdev=119.85 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 209], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 259], 
00:31:41.248 | 30.00th=[ 296], 40.00th=[ 313], 50.00th=[ 355], 60.00th=[ 397], 00:31:41.248 | 70.00th=[ 409], 80.00th=[ 493], 90.00th=[ 575], 95.00th=[ 592], 00:31:41.248 | 99.00th=[ 617], 99.50th=[ 617], 99.90th=[ 617], 99.95th=[ 617], 00:31:41.248 | 99.99th=[ 617] 00:31:41.248 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=165.60, stdev=57.40, samples=20 00:31:41.248 iops : min= 32, max= 64, avg=41.40, stdev=14.35, samples=20 00:31:41.248 lat (msec) : 250=15.35%, 500=65.12%, 750=19.53% 00:31:41.248 cpu : usr=98.67%, sys=0.85%, ctx=29, majf=0, minf=46 00:31:41.248 IO depths : 1=4.0%, 2=10.0%, 4=24.4%, 8=53.3%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename2: (groupid=0, jobs=1): err= 0: pid=1345521: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=46, BW=187KiB/s (191kB/s)(1904KiB/10186msec) 00:31:41.248 slat (usec): min=8, max=155, avg=29.33, stdev=28.23 00:31:41.248 clat (msec): min=141, max=748, avg=340.18, stdev=105.42 00:31:41.248 lat (msec): min=141, max=748, avg=340.21, stdev=105.43 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 142], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 234], 00:31:41.248 | 30.00th=[ 284], 40.00th=[ 305], 50.00th=[ 355], 60.00th=[ 384], 00:31:41.248 | 70.00th=[ 397], 80.00th=[ 401], 90.00th=[ 472], 95.00th=[ 527], 00:31:41.248 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 751], 99.95th=[ 751], 00:31:41.248 | 99.99th=[ 751] 00:31:41.248 bw ( KiB/s): min= 128, max= 384, per=4.26%, avg=193.68, stdev=66.80, samples=19 00:31:41.248 iops : min= 32, max= 96, avg=48.42, stdev=16.70, samples=19 00:31:41.248 lat (msec) : 250=21.01%, 500=70.17%, 750=8.82% 00:31:41.248 cpu : usr=98.59%, sys=0.87%, ctx=32, majf=0, minf=72 00:31:41.248 IO depths : 1=1.9%, 2=5.9%, 4=18.1%, 8=63.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:41.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.248 issued rwts: total=476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.248 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.248 filename2: (groupid=0, jobs=1): err= 0: pid=1345522: Sun Nov 17 19:40:37 2024 00:31:41.248 read: IOPS=57, BW=231KiB/s (237kB/s)(2360KiB/10210msec) 00:31:41.248 slat (nsec): min=7756, max=77906, avg=14759.21, stdev=10256.85 00:31:41.248 clat (msec): min=10, max=577, avg=274.57, stdev=110.74 00:31:41.248 lat (msec): min=10, max=577, avg=274.58, stdev=110.73 00:31:41.248 clat percentiles (msec): 00:31:41.248 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 138], 20.00th=[ 188], 00:31:41.248 | 30.00th=[ 215], 40.00th=[ 239], 50.00th=[ 268], 60.00th=[ 317], 00:31:41.248 | 70.00th=[ 359], 80.00th=[ 384], 90.00th=[ 401], 95.00th=[ 414], 00:31:41.248 | 99.00th=[ 567], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:31:41.248 | 99.99th=[ 575] 00:31:41.248 bw ( KiB/s): min= 128, max= 512, per=5.06%, avg=229.60, stdev=100.56, samples=20 00:31:41.248 iops : min= 32, max= 128, avg=57.40, stdev=25.14, samples=20 00:31:41.248 lat (msec) : 20=5.42%, 250=38.31%, 500=54.92%, 750=1.36% 00:31:41.249 cpu : usr=98.88%, sys=0.72%, ctx=17, majf=0, minf=60 00:31:41.249 IO depths : 1=1.5%, 2=3.4%, 4=11.5%, 8=72.4%, 16=11.2%, 32=0.0%, >=64=0.0% 
00:31:41.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 complete : 0=0.0%, 4=90.1%, 8=4.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.249 filename2: (groupid=0, jobs=1): err= 0: pid=1345523: Sun Nov 17 19:40:37 2024 00:31:41.249 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10118msec) 00:31:41.249 slat (usec): min=6, max=171, avg=42.37, stdev=43.71 00:31:41.249 clat (msec): min=162, max=748, avg=404.38, stdev=132.73 00:31:41.249 lat (msec): min=162, max=748, avg=404.42, stdev=132.75 00:31:41.249 clat percentiles (msec): 00:31:41.249 | 1.00th=[ 184], 5.00th=[ 209], 10.00th=[ 247], 20.00th=[ 279], 00:31:41.249 | 30.00th=[ 296], 40.00th=[ 351], 50.00th=[ 384], 60.00th=[ 418], 00:31:41.249 | 70.00th=[ 542], 80.00th=[ 567], 90.00th=[ 584], 95.00th=[ 609], 00:31:41.249 | 99.00th=[ 634], 99.50th=[ 743], 99.90th=[ 751], 99.95th=[ 751], 00:31:41.249 | 99.99th=[ 751] 00:31:41.249 bw ( KiB/s): min= 16, max= 256, per=3.55%, avg=161.68, stdev=70.53, samples=19 00:31:41.249 iops : min= 4, max= 64, avg=40.42, stdev=17.63, samples=19 00:31:41.249 lat (msec) : 250=13.00%, 500=56.00%, 750=31.00% 00:31:41.249 cpu : usr=98.71%, sys=0.74%, ctx=26, majf=0, minf=36 00:31:41.249 IO depths : 1=3.2%, 2=9.2%, 4=24.2%, 8=54.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:41.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.249 filename2: (groupid=0, jobs=1): err= 0: pid=1345524: Sun Nov 17 19:40:37 2024 00:31:41.249 read: IOPS=45, BW=181KiB/s (186kB/s)(1848KiB/10183msec) 00:31:41.249 slat (usec): min=6, max=148, avg=50.61, stdev=41.06 00:31:41.249 clat (msec): min=178, max=543, avg=351.78, stdev=77.79 00:31:41.249 lat (msec): min=178, max=543, avg=351.83, stdev=77.79 00:31:41.249 clat percentiles (msec): 00:31:41.249 | 1.00th=[ 213], 5.00th=[ 218], 10.00th=[ 234], 20.00th=[ 284], 00:31:41.249 | 30.00th=[ 292], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 388], 00:31:41.249 | 70.00th=[ 401], 80.00th=[ 405], 90.00th=[ 439], 95.00th=[ 464], 00:31:41.249 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:31:41.249 | 99.99th=[ 542] 00:31:41.249 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=178.40, stdev=73.28, samples=20 00:31:41.249 iops : min= 32, max= 96, avg=44.60, stdev=18.32, samples=20 00:31:41.249 lat (msec) : 250=13.42%, 500=82.25%, 750=4.33% 00:31:41.249 cpu : usr=98.46%, sys=0.95%, ctx=49, majf=0, minf=56 00:31:41.249 IO depths : 1=3.7%, 2=10.0%, 4=25.1%, 8=52.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:41.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.249 filename2: (groupid=0, jobs=1): err= 0: pid=1345525: Sun Nov 17 19:40:37 2024 00:31:41.249 read: IOPS=53, BW=213KiB/s (218kB/s)(2176KiB/10213msec) 00:31:41.249 slat (usec): min=3, max=170, avg=38.67, stdev=36.16 00:31:41.249 clat (msec): min=3, max=727, avg=298.52, stdev=126.83 00:31:41.249 lat (msec): min=3, max=727, avg=298.56, stdev=126.84 
00:31:41.249 clat percentiles (msec): 00:31:41.249 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 138], 20.00th=[ 220], 00:31:41.249 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 342], 00:31:41.249 | 70.00th=[ 372], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 527], 00:31:41.249 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 726], 99.95th=[ 726], 00:31:41.249 | 99.99th=[ 726] 00:31:41.249 bw ( KiB/s): min= 112, max= 641, per=4.66%, avg=211.25, stdev=118.18, samples=20 00:31:41.249 iops : min= 28, max= 160, avg=52.80, stdev=29.50, samples=20 00:31:41.249 lat (msec) : 4=0.37%, 10=2.57%, 20=5.88%, 250=17.65%, 500=68.01% 00:31:41.249 lat (msec) : 750=5.51% 00:31:41.249 cpu : usr=98.61%, sys=0.90%, ctx=21, majf=0, minf=67 00:31:41.249 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:31:41.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.249 filename2: (groupid=0, jobs=1): err= 0: pid=1345526: Sun Nov 17 19:40:37 2024 00:31:41.249 read: IOPS=37, BW=151KiB/s (155kB/s)(1536KiB/10179msec) 00:31:41.249 slat (nsec): min=3841, max=52128, avg=20603.79, stdev=6339.35 00:31:41.249 clat (msec): min=137, max=801, avg=423.83, stdev=139.19 00:31:41.249 lat (msec): min=137, max=801, avg=423.85, stdev=139.19 00:31:41.249 clat percentiles (msec): 00:31:41.249 | 1.00th=[ 165], 5.00th=[ 239], 10.00th=[ 257], 20.00th=[ 284], 00:31:41.249 | 30.00th=[ 309], 40.00th=[ 388], 50.00th=[ 397], 60.00th=[ 477], 00:31:41.249 | 70.00th=[ 531], 80.00th=[ 575], 90.00th=[ 592], 95.00th=[ 634], 00:31:41.249 | 99.00th=[ 793], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 802], 00:31:41.249 | 99.99th=[ 802] 00:31:41.249 bw ( KiB/s): min= 16, max= 256, per=3.24%, avg=147.20, stdev=59.55, samples=20 00:31:41.249 iops : min= 4, max= 64, avg=36.80, stdev=14.89, samples=20 00:31:41.249 lat (msec) : 250=9.38%, 500=58.33%, 750=30.73%, 1000=1.56% 00:31:41.249 cpu : usr=98.17%, sys=1.15%, ctx=40, majf=0, minf=36 00:31:41.249 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:41.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.249 filename2: (groupid=0, jobs=1): err= 0: pid=1345527: Sun Nov 17 19:40:37 2024 00:31:41.249 read: IOPS=48, BW=195KiB/s (199kB/s)(1984KiB/10186msec) 00:31:41.249 slat (nsec): min=6369, max=35582, avg=13649.31, stdev=6361.99 00:31:41.249 clat (msec): min=74, max=414, avg=326.81, stdev=81.52 00:31:41.249 lat (msec): min=74, max=414, avg=326.82, stdev=81.52 00:31:41.249 clat percentiles (msec): 00:31:41.249 | 1.00th=[ 75], 5.00th=[ 136], 10.00th=[ 218], 20.00th=[ 275], 00:31:41.249 | 30.00th=[ 305], 40.00th=[ 334], 50.00th=[ 359], 60.00th=[ 372], 00:31:41.249 | 70.00th=[ 388], 80.00th=[ 397], 90.00th=[ 401], 95.00th=[ 414], 00:31:41.249 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:31:41.249 | 99.99th=[ 414] 00:31:41.249 bw ( KiB/s): min= 128, max= 368, per=4.22%, avg=192.00, stdev=70.03, samples=20 00:31:41.249 iops : min= 32, max= 92, avg=48.00, stdev=17.51, samples=20 00:31:41.249 lat (msec) : 100=2.82%, 250=13.31%, 
500=83.87% 00:31:41.249 cpu : usr=98.84%, sys=0.70%, ctx=26, majf=0, minf=40 00:31:41.249 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:41.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.249 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.249 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.249 00:31:41.249 Run status group 0 (all jobs): 00:31:41.249 READ: bw=4530KiB/s (4639kB/s), 151KiB/s-249KiB/s (155kB/s-255kB/s), io=45.2MiB (47.4MB), run=10114-10216msec 00:31:41.249 19:40:37 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:41.249 19:40:37 -- target/dif.sh@43 -- # local sub 00:31:41.249 19:40:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:41.249 19:40:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:41.249 19:40:37 -- target/dif.sh@36 -- # local sub_id=0 00:31:41.249 19:40:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:41.249 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.249 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.249 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.249 19:40:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:41.249 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.249 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.249 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.249 19:40:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:41.249 19:40:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:41.249 19:40:37 -- target/dif.sh@36 -- # local sub_id=1 00:31:41.249 19:40:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:41.249 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.249 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.249 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.249 19:40:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:41.249 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.249 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.249 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.249 19:40:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:41.249 19:40:37 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:41.249 19:40:37 -- target/dif.sh@36 -- # local sub_id=2 00:31:41.249 19:40:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:41.249 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.249 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.249 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.249 19:40:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:41.249 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.249 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.249 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.249 19:40:37 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:41.249 19:40:37 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:41.249 19:40:37 -- target/dif.sh@115 -- # numjobs=2 00:31:41.249 19:40:37 -- target/dif.sh@115 -- # iodepth=8 00:31:41.249 19:40:37 -- target/dif.sh@115 -- # runtime=5 00:31:41.249 19:40:37 -- target/dif.sh@115 -- # 
files=1 00:31:41.249 19:40:37 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:41.249 19:40:37 -- target/dif.sh@28 -- # local sub 00:31:41.249 19:40:37 -- target/dif.sh@30 -- # for sub in "$@" 00:31:41.249 19:40:37 -- target/dif.sh@31 -- # create_subsystem 0 00:31:41.249 19:40:37 -- target/dif.sh@18 -- # local sub_id=0 00:31:41.250 19:40:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 bdev_null0 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 [2024-11-17 19:40:37.673442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@30 -- # for sub in "$@" 00:31:41.250 19:40:37 -- target/dif.sh@31 -- # create_subsystem 1 00:31:41.250 19:40:37 -- target/dif.sh@18 -- # local sub_id=1 00:31:41.250 19:40:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 bdev_null1 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.250 19:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.250 19:40:37 -- common/autotest_common.sh@10 -- # set +x 00:31:41.250 19:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.250 19:40:37 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:41.250 19:40:37 -- target/dif.sh@118 -- 
# create_json_sub_conf 0 1 00:31:41.250 19:40:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:41.250 19:40:37 -- nvmf/common.sh@520 -- # config=() 00:31:41.250 19:40:37 -- nvmf/common.sh@520 -- # local subsystem config 00:31:41.250 19:40:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:41.250 19:40:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.250 19:40:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:41.250 { 00:31:41.250 "params": { 00:31:41.250 "name": "Nvme$subsystem", 00:31:41.250 "trtype": "$TEST_TRANSPORT", 00:31:41.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.250 "adrfam": "ipv4", 00:31:41.250 "trsvcid": "$NVMF_PORT", 00:31:41.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.250 "hdgst": ${hdgst:-false}, 00:31:41.250 "ddgst": ${ddgst:-false} 00:31:41.250 }, 00:31:41.250 "method": "bdev_nvme_attach_controller" 00:31:41.250 } 00:31:41.250 EOF 00:31:41.250 )") 00:31:41.250 19:40:37 -- target/dif.sh@82 -- # gen_fio_conf 00:31:41.250 19:40:37 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.250 19:40:37 -- target/dif.sh@54 -- # local file 00:31:41.250 19:40:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:41.250 19:40:37 -- target/dif.sh@56 -- # cat 00:31:41.250 19:40:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.250 19:40:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:41.250 19:40:37 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.250 19:40:37 -- common/autotest_common.sh@1330 -- # shift 00:31:41.250 19:40:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:41.250 19:40:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.250 19:40:37 -- nvmf/common.sh@542 -- # cat 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.250 19:40:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:41.250 19:40:37 -- target/dif.sh@72 -- # (( file <= files )) 00:31:41.250 19:40:37 -- target/dif.sh@73 -- # cat 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:41.250 19:40:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:41.250 19:40:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:41.250 { 00:31:41.250 "params": { 00:31:41.250 "name": "Nvme$subsystem", 00:31:41.250 "trtype": "$TEST_TRANSPORT", 00:31:41.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.250 "adrfam": "ipv4", 00:31:41.250 "trsvcid": "$NVMF_PORT", 00:31:41.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.250 "hdgst": ${hdgst:-false}, 00:31:41.250 "ddgst": ${ddgst:-false} 00:31:41.250 }, 00:31:41.250 "method": "bdev_nvme_attach_controller" 00:31:41.250 } 00:31:41.250 EOF 00:31:41.250 )") 00:31:41.250 19:40:37 -- target/dif.sh@72 -- # (( file++ )) 00:31:41.250 19:40:37 -- nvmf/common.sh@542 -- # cat 00:31:41.250 19:40:37 -- target/dif.sh@72 -- # (( file <= files )) 00:31:41.250 19:40:37 -- nvmf/common.sh@544 -- # jq . 
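The two heredoc fragments assembled above are what gen_nvmf_target_json turns into the bdev configuration that the spdk_bdev fio ioengine reads from /dev/fd/62. As a standalone illustration, the finished document looks roughly like the sketch below. The outer "subsystems"/"bdev" wrapper is added inside nvmf/common.sh and is not visible in this excerpt, so treat that framing as an assumption; the bdev_nvme_attach_controller parameters themselves are the ones the trace prints a few lines further on.

# Hypothetical reconstruction of the config fed to fio via /dev/fd/62;
# the file name is illustrative, the test itself uses a pipe.
cat <<'EOF' > /tmp/dif_bdev.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF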
00:31:41.250 19:40:37 -- nvmf/common.sh@545 -- # IFS=, 00:31:41.250 19:40:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:41.250 "params": { 00:31:41.250 "name": "Nvme0", 00:31:41.250 "trtype": "tcp", 00:31:41.250 "traddr": "10.0.0.2", 00:31:41.250 "adrfam": "ipv4", 00:31:41.250 "trsvcid": "4420", 00:31:41.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:41.250 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:41.250 "hdgst": false, 00:31:41.250 "ddgst": false 00:31:41.250 }, 00:31:41.250 "method": "bdev_nvme_attach_controller" 00:31:41.250 },{ 00:31:41.250 "params": { 00:31:41.250 "name": "Nvme1", 00:31:41.250 "trtype": "tcp", 00:31:41.250 "traddr": "10.0.0.2", 00:31:41.250 "adrfam": "ipv4", 00:31:41.250 "trsvcid": "4420", 00:31:41.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.250 "hdgst": false, 00:31:41.250 "ddgst": false 00:31:41.250 }, 00:31:41.250 "method": "bdev_nvme_attach_controller" 00:31:41.250 }' 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:41.250 19:40:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:41.250 19:40:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:41.250 19:40:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:41.250 19:40:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:41.250 19:40:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:41.250 19:40:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.250 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:41.250 ... 00:31:41.250 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:41.250 ... 00:31:41.250 fio-3.35 00:31:41.250 Starting 4 threads 00:31:41.250 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.250 [2024-11-17 19:40:38.654716] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
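The companion /dev/fd/61 stream is the fio job file that gen_fio_conf builds from the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1). A minimal standalone sketch of an equivalent run follows; everything in [global] beyond the block sizes, queue depth, job count and run time is an assumption, as is the Nvme0n1/Nvme1n1 naming (controller name plus namespace 1), since gen_fio_conf itself is not part of this excerpt.

# Hypothetical standalone equivalent of the traced fio_bdev invocation.
cat <<'EOF' > /tmp/dif_rand.fio
[global]
thread=1
direct=1
time_based=1
runtime=5
rw=randread
bs=8k,16k,128k      # read,write,trim sizes, matching the job banners above
iodepth=8
numjobs=2

[filename0]
filename=Nvme0n1    # namespace of the controller attached as "Nvme0"

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_bdev.json /tmp/dif_rand.fio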
00:31:41.250 [2024-11-17 19:40:38.654778] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:46.516 00:31:46.516 filename0: (groupid=0, jobs=1): err= 0: pid=1346947: Sun Nov 17 19:40:43 2024 00:31:46.516 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.6MiB/5002msec) 00:31:46.516 slat (nsec): min=3773, max=72375, avg=17273.07, stdev=10298.29 00:31:46.516 clat (usec): min=807, max=8497, avg=4073.22, stdev=520.09 00:31:46.516 lat (usec): min=820, max=8508, avg=4090.49, stdev=520.73 00:31:46.516 clat percentiles (usec): 00:31:46.516 | 1.00th=[ 2606], 5.00th=[ 3261], 10.00th=[ 3589], 20.00th=[ 3818], 00:31:46.516 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:46.516 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4752], 00:31:46.516 | 99.00th=[ 5866], 99.50th=[ 6390], 99.90th=[ 7439], 99.95th=[ 7570], 00:31:46.516 | 99.99th=[ 8455] 00:31:46.516 bw ( KiB/s): min=14749, max=15952, per=25.11%, avg=15481.30, stdev=419.19, samples=10 00:31:46.516 iops : min= 1843, max= 1994, avg=1935.10, stdev=52.52, samples=10 00:31:46.516 lat (usec) : 1000=0.06% 00:31:46.516 lat (msec) : 2=0.31%, 4=37.52%, 10=62.10% 00:31:46.516 cpu : usr=94.74%, sys=4.72%, ctx=11, majf=0, minf=142 00:31:46.516 IO depths : 1=0.4%, 2=12.5%, 4=59.8%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.516 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.516 issued rwts: total=9682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.516 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:46.516 filename0: (groupid=0, jobs=1): err= 0: pid=1346948: Sun Nov 17 19:40:43 2024 00:31:46.516 read: IOPS=1936, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5001msec) 00:31:46.516 slat (nsec): min=3823, max=77361, avg=18681.85, stdev=9709.23 00:31:46.516 clat (usec): min=749, max=7564, avg=4067.00, stdev=514.91 00:31:46.516 lat (usec): min=770, max=7575, avg=4085.68, stdev=515.67 00:31:46.516 clat percentiles (usec): 00:31:46.516 | 1.00th=[ 2638], 5.00th=[ 3228], 10.00th=[ 3556], 20.00th=[ 3851], 00:31:46.517 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:46.517 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4752], 00:31:46.517 | 99.00th=[ 5997], 99.50th=[ 6325], 99.90th=[ 7046], 99.95th=[ 7308], 00:31:46.517 | 99.99th=[ 7570] 00:31:46.517 bw ( KiB/s): min=14749, max=16112, per=25.00%, avg=15416.56, stdev=455.22, samples=9 00:31:46.517 iops : min= 1843, max= 2014, avg=1927.00, stdev=57.02, samples=9 00:31:46.517 lat (usec) : 750=0.01%, 1000=0.02% 00:31:46.517 lat (msec) : 2=0.33%, 4=37.51%, 10=62.13% 00:31:46.517 cpu : usr=96.12%, sys=3.40%, ctx=10, majf=0, minf=92 00:31:46.517 IO depths : 1=0.2%, 2=14.6%, 4=57.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.517 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.517 issued rwts: total=9685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.517 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:46.517 filename1: (groupid=0, jobs=1): err= 0: pid=1346949: Sun Nov 17 19:40:43 2024 00:31:46.517 read: IOPS=1922, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5002msec) 00:31:46.517 slat (nsec): min=3762, max=72262, avg=19392.54, stdev=10637.36 00:31:46.517 clat (usec): min=773, max=10161, avg=4093.47, stdev=544.73 00:31:46.517 lat (usec): min=785, max=10173, 
avg=4112.86, stdev=544.96 00:31:46.517 clat percentiles (usec): 00:31:46.517 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3621], 20.00th=[ 3851], 00:31:46.517 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:46.517 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4883], 00:31:46.517 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7242], 99.95th=[ 7635], 00:31:46.517 | 99.99th=[10159] 00:31:46.517 bw ( KiB/s): min=14592, max=15968, per=24.88%, avg=15342.22, stdev=476.48, samples=9 00:31:46.517 iops : min= 1824, max= 1996, avg=1917.78, stdev=59.56, samples=9 00:31:46.517 lat (usec) : 1000=0.08% 00:31:46.517 lat (msec) : 2=0.51%, 4=37.13%, 10=62.26%, 20=0.01% 00:31:46.517 cpu : usr=95.00%, sys=4.50%, ctx=6, majf=0, minf=140 00:31:46.517 IO depths : 1=0.1%, 2=15.0%, 4=57.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.517 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.517 issued rwts: total=9614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.517 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:46.517 filename1: (groupid=0, jobs=1): err= 0: pid=1346950: Sun Nov 17 19:40:43 2024 00:31:46.517 read: IOPS=1915, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5003msec) 00:31:46.517 slat (nsec): min=3701, max=99022, avg=20385.91, stdev=10636.95 00:31:46.517 clat (usec): min=696, max=11411, avg=4104.05, stdev=541.57 00:31:46.517 lat (usec): min=715, max=11431, avg=4124.44, stdev=541.72 00:31:46.517 clat percentiles (usec): 00:31:46.517 | 1.00th=[ 2540], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3884], 00:31:46.517 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:46.517 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4883], 00:31:46.517 | 99.00th=[ 5997], 99.50th=[ 6652], 99.90th=[ 7701], 99.95th=[ 8848], 00:31:46.517 | 99.99th=[11469] 00:31:46.517 bw ( KiB/s): min=14656, max=16000, per=24.84%, avg=15316.80, stdev=411.60, samples=10 00:31:46.517 iops : min= 1832, max= 2000, avg=1914.60, stdev=51.45, samples=10 00:31:46.517 lat (usec) : 750=0.01%, 1000=0.17% 00:31:46.517 lat (msec) : 2=0.46%, 4=38.39%, 10=60.96%, 20=0.01% 00:31:46.517 cpu : usr=88.52%, sys=7.12%, ctx=289, majf=0, minf=91 00:31:46.517 IO depths : 1=0.3%, 2=15.6%, 4=57.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.517 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.517 issued rwts: total=9581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.517 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:46.517 00:31:46.517 Run status group 0 (all jobs): 00:31:46.517 READ: bw=60.2MiB/s (63.1MB/s), 15.0MiB/s-15.1MiB/s (15.7MB/s-15.9MB/s), io=301MiB (316MB), run=5001-5003msec 00:31:46.517 19:40:44 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:46.517 19:40:44 -- target/dif.sh@43 -- # local sub 00:31:46.517 19:40:44 -- target/dif.sh@45 -- # for sub in "$@" 00:31:46.517 19:40:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:46.517 19:40:44 -- target/dif.sh@36 -- # local sub_id=0 00:31:46.517 19:40:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 19:40:44 
-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 19:40:44 -- target/dif.sh@45 -- # for sub in "$@" 00:31:46.517 19:40:44 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:46.517 19:40:44 -- target/dif.sh@36 -- # local sub_id=1 00:31:46.517 19:40:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 19:40:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 00:31:46.517 real 0m24.534s 00:31:46.517 user 4m38.397s 00:31:46.517 sys 0m5.104s 00:31:46.517 19:40:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 ************************************ 00:31:46.517 END TEST fio_dif_rand_params 00:31:46.517 ************************************ 00:31:46.517 19:40:44 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:46.517 19:40:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:46.517 19:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 ************************************ 00:31:46.517 START TEST fio_dif_digest 00:31:46.517 ************************************ 00:31:46.517 19:40:44 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:31:46.517 19:40:44 -- target/dif.sh@123 -- # local NULL_DIF 00:31:46.517 19:40:44 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:46.517 19:40:44 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:46.517 19:40:44 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:46.517 19:40:44 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:46.517 19:40:44 -- target/dif.sh@127 -- # numjobs=3 00:31:46.517 19:40:44 -- target/dif.sh@127 -- # iodepth=3 00:31:46.517 19:40:44 -- target/dif.sh@127 -- # runtime=10 00:31:46.517 19:40:44 -- target/dif.sh@128 -- # hdgst=true 00:31:46.517 19:40:44 -- target/dif.sh@128 -- # ddgst=true 00:31:46.517 19:40:44 -- target/dif.sh@130 -- # create_subsystems 0 00:31:46.517 19:40:44 -- target/dif.sh@28 -- # local sub 00:31:46.517 19:40:44 -- target/dif.sh@30 -- # for sub in "$@" 00:31:46.517 19:40:44 -- target/dif.sh@31 -- # create_subsystem 0 00:31:46.517 19:40:44 -- target/dif.sh@18 -- # local sub_id=0 00:31:46.517 19:40:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 bdev_null0 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 19:40:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 19:40:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:46.517 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.517 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.517 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.517 19:40:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:46.518 19:40:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.518 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:31:46.518 [2024-11-17 19:40:44.167237] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.518 19:40:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.518 19:40:44 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:46.518 19:40:44 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:46.518 19:40:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:46.518 19:40:44 -- nvmf/common.sh@520 -- # config=() 00:31:46.518 19:40:44 -- nvmf/common.sh@520 -- # local subsystem config 00:31:46.518 19:40:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:46.518 19:40:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.518 19:40:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:46.518 { 00:31:46.518 "params": { 00:31:46.518 "name": "Nvme$subsystem", 00:31:46.518 "trtype": "$TEST_TRANSPORT", 00:31:46.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.518 "adrfam": "ipv4", 00:31:46.518 "trsvcid": "$NVMF_PORT", 00:31:46.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.518 "hdgst": ${hdgst:-false}, 00:31:46.518 "ddgst": ${ddgst:-false} 00:31:46.518 }, 00:31:46.518 "method": "bdev_nvme_attach_controller" 00:31:46.518 } 00:31:46.518 EOF 00:31:46.518 )") 00:31:46.518 19:40:44 -- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.518 19:40:44 -- target/dif.sh@82 -- # gen_fio_conf 00:31:46.518 19:40:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:31:46.518 19:40:44 -- target/dif.sh@54 -- # local file 00:31:46.518 19:40:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.518 19:40:44 -- target/dif.sh@56 -- # cat 00:31:46.518 19:40:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:31:46.518 19:40:44 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.518 19:40:44 -- common/autotest_common.sh@1330 -- # shift 00:31:46.518 19:40:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:31:46.518 19:40:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.518 19:40:44 -- nvmf/common.sh@542 -- # cat 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.518 19:40:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:46.518 19:40:44 -- target/dif.sh@72 -- # (( file <= files )) 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:31:46.518 19:40:44 -- 
common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:46.518 19:40:44 -- nvmf/common.sh@544 -- # jq . 00:31:46.518 19:40:44 -- nvmf/common.sh@545 -- # IFS=, 00:31:46.518 19:40:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:46.518 "params": { 00:31:46.518 "name": "Nvme0", 00:31:46.518 "trtype": "tcp", 00:31:46.518 "traddr": "10.0.0.2", 00:31:46.518 "adrfam": "ipv4", 00:31:46.518 "trsvcid": "4420", 00:31:46.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:46.518 "hdgst": true, 00:31:46.518 "ddgst": true 00:31:46.518 }, 00:31:46.518 "method": "bdev_nvme_attach_controller" 00:31:46.518 }' 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:46.518 19:40:44 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:46.518 19:40:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:31:46.518 19:40:44 -- common/autotest_common.sh@1334 -- # asan_lib= 00:31:46.518 19:40:44 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:31:46.518 19:40:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:46.518 19:40:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.518 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:46.518 ... 00:31:46.518 fio-3.35 00:31:46.518 Starting 3 threads 00:31:46.518 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.776 [2024-11-17 19:40:44.784349] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
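Relative to the earlier randread cases, this digest run changes two things: bdev_null0 is created with --dif-type 3, and the attach parameters just printed set "hdgst" and "ddgst" to true, so the NVMe/TCP initiator side negotiates header and data digests against the same listener. A hypothetical replay of the target-side setup calling scripts/rpc.py directly, instead of the dif.sh rpc_cmd wrapper and assuming the default /var/tmp/spdk.sock RPC socket, would be:

# rpc.py path as laid out in this workspace; adjust to your checkout.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 512-byte-block null bdev with 16 bytes of metadata per block, protected with DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# subsystem, namespace and TCP listener, mirroring "create_subsystem 0" in the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420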
00:31:46.776 [2024-11-17 19:40:44.784420] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:56.747 00:31:56.747 filename0: (groupid=0, jobs=1): err= 0: pid=1347846: Sun Nov 17 19:40:54 2024 00:31:56.747 read: IOPS=213, BW=26.7MiB/s (27.9MB/s)(268MiB/10046msec) 00:31:56.747 slat (nsec): min=4500, max=41470, avg=18056.57, stdev=3486.74 00:31:56.747 clat (usec): min=10794, max=53000, avg=14028.54, stdev=1515.21 00:31:56.747 lat (usec): min=10812, max=53017, avg=14046.60, stdev=1515.22 00:31:56.747 clat percentiles (usec): 00:31:56.747 | 1.00th=[11731], 5.00th=[12518], 10.00th=[12780], 20.00th=[13173], 00:31:56.747 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:31:56.747 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:31:56.747 | 99.00th=[16712], 99.50th=[17433], 99.90th=[21627], 99.95th=[49021], 00:31:56.747 | 99.99th=[53216] 00:31:56.747 bw ( KiB/s): min=26368, max=28160, per=34.11%, avg=27392.00, stdev=462.44, samples=20 00:31:56.747 iops : min= 206, max= 220, avg=214.00, stdev= 3.61, samples=20 00:31:56.747 lat (msec) : 20=99.77%, 50=0.19%, 100=0.05% 00:31:56.747 cpu : usr=95.41%, sys=4.10%, ctx=20, majf=0, minf=78 00:31:56.747 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.747 issued rwts: total=2142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.747 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.747 filename0: (groupid=0, jobs=1): err= 0: pid=1347847: Sun Nov 17 19:40:54 2024 00:31:56.747 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10047msec) 00:31:56.747 slat (nsec): min=8824, max=79584, avg=17264.73, stdev=4098.62 00:31:56.747 clat (usec): min=11250, max=50709, avg=14408.33, stdev=1451.06 00:31:56.747 lat (usec): min=11268, max=50728, avg=14425.59, stdev=1451.27 00:31:56.747 clat percentiles (usec): 00:31:56.747 | 1.00th=[12256], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:31:56.747 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:31:56.747 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:31:56.747 | 99.00th=[16909], 99.50th=[17433], 99.90th=[22414], 99.95th=[47973], 00:31:56.747 | 99.99th=[50594] 00:31:56.747 bw ( KiB/s): min=25394, max=27392, per=33.20%, avg=26664.90, stdev=500.33, samples=20 00:31:56.747 iops : min= 198, max= 214, avg=208.30, stdev= 3.96, samples=20 00:31:56.747 lat (msec) : 20=99.86%, 50=0.10%, 100=0.05% 00:31:56.747 cpu : usr=96.25%, sys=3.26%, ctx=31, majf=0, minf=132 00:31:56.747 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.747 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.747 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.747 filename0: (groupid=0, jobs=1): err= 0: pid=1347848: Sun Nov 17 19:40:54 2024 00:31:56.747 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(260MiB/10045msec) 00:31:56.747 slat (nsec): min=8562, max=61577, avg=16738.42, stdev=4022.79 00:31:56.747 clat (usec): min=10985, max=48246, avg=14476.21, stdev=1378.57 00:31:56.747 lat (usec): min=11003, max=48260, avg=14492.95, stdev=1378.65 00:31:56.747 clat percentiles (usec): 
00:31:56.747 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:31:56.747 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:31:56.747 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:31:56.747 | 99.00th=[16909], 99.50th=[17171], 99.90th=[20317], 99.95th=[45876], 00:31:56.747 | 99.99th=[48497] 00:31:56.747 bw ( KiB/s): min=25600, max=27136, per=33.05%, avg=26547.20, stdev=322.75, samples=20 00:31:56.747 iops : min= 200, max= 212, avg=207.40, stdev= 2.52, samples=20 00:31:56.747 lat (msec) : 20=99.86%, 50=0.14% 00:31:56.747 cpu : usr=96.04%, sys=3.47%, ctx=21, majf=0, minf=186 00:31:56.747 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.747 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.747 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.747 00:31:56.747 Run status group 0 (all jobs): 00:31:56.747 READ: bw=78.4MiB/s (82.2MB/s), 25.8MiB/s-26.7MiB/s (27.1MB/s-27.9MB/s), io=788MiB (826MB), run=10045-10047msec 00:31:57.008 19:40:55 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:57.008 19:40:55 -- target/dif.sh@43 -- # local sub 00:31:57.008 19:40:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.008 19:40:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.008 19:40:55 -- target/dif.sh@36 -- # local sub_id=0 00:31:57.008 19:40:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.008 19:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.008 19:40:55 -- common/autotest_common.sh@10 -- # set +x 00:31:57.008 19:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.008 19:40:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.008 19:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.008 19:40:55 -- common/autotest_common.sh@10 -- # set +x 00:31:57.008 19:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.008 00:31:57.008 real 0m11.004s 00:31:57.008 user 0m29.879s 00:31:57.008 sys 0m1.368s 00:31:57.008 19:40:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:31:57.008 19:40:55 -- common/autotest_common.sh@10 -- # set +x 00:31:57.008 ************************************ 00:31:57.008 END TEST fio_dif_digest 00:31:57.008 ************************************ 00:31:57.008 19:40:55 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:57.008 19:40:55 -- target/dif.sh@147 -- # nvmftestfini 00:31:57.008 19:40:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:57.008 19:40:55 -- nvmf/common.sh@116 -- # sync 00:31:57.008 19:40:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:57.008 19:40:55 -- nvmf/common.sh@119 -- # set +e 00:31:57.008 19:40:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:57.008 19:40:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:57.008 rmmod nvme_tcp 00:31:57.008 rmmod nvme_fabrics 00:31:57.008 rmmod nvme_keyring 00:31:57.008 19:40:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:57.008 19:40:55 -- nvmf/common.sh@123 -- # set -e 00:31:57.008 19:40:55 -- nvmf/common.sh@124 -- # return 0 00:31:57.008 19:40:55 -- nvmf/common.sh@477 -- # '[' -n 1341491 ']' 00:31:57.008 19:40:55 -- nvmf/common.sh@478 -- # killprocess 1341491 00:31:57.008 19:40:55 -- common/autotest_common.sh@936 -- # '[' -z 1341491 ']' 
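The tail of the digest test is the generic cleanup path: destroy_subsystems deletes the subsystem and the null bdev over RPC, nvmftestfini syncs and unloads the initiator-side kernel modules, and killprocess stops the nvmf target application (pid 1341491 in this run). A hypothetical manual equivalent, under the same rpc.py and socket assumptions as above and with $NVMF_PID standing in for the tracked pid:

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0

sync
modprobe -v -r nvme-tcp       # -v prints the rmmod calls seen in the trace above
modprobe -v -r nvme-fabrics   # no-op once the fabrics stack is already gone
kill "$NVMF_PID" && wait "$NVMF_PID"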
00:31:57.008 19:40:55 -- common/autotest_common.sh@940 -- # kill -0 1341491 00:31:57.008 19:40:55 -- common/autotest_common.sh@941 -- # uname 00:31:57.008 19:40:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:57.008 19:40:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1341491 00:31:57.008 19:40:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:57.008 19:40:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:57.008 19:40:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1341491' 00:31:57.008 killing process with pid 1341491 00:31:57.008 19:40:55 -- common/autotest_common.sh@955 -- # kill 1341491 00:31:57.008 19:40:55 -- common/autotest_common.sh@960 -- # wait 1341491 00:31:57.286 19:40:55 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:57.286 19:40:55 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:58.232 Waiting for block devices as requested 00:31:58.490 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:58.490 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:58.490 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:58.748 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:58.748 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:58.748 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:59.006 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:59.006 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:59.006 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:59.006 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:59.266 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:59.266 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:59.266 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:59.266 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:59.527 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:59.527 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:59.527 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:59.787 19:40:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:59.787 19:40:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:59.787 19:40:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:59.787 19:40:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:59.787 19:40:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.787 19:40:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:59.787 19:40:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.688 19:40:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:01.688 00:32:01.688 real 1m7.294s 00:32:01.688 user 6m37.230s 00:32:01.688 sys 0m14.707s 00:32:01.688 19:40:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:01.688 19:40:59 -- common/autotest_common.sh@10 -- # set +x 00:32:01.688 ************************************ 00:32:01.688 END TEST nvmf_dif 00:32:01.688 ************************************ 00:32:01.688 19:40:59 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:01.688 19:40:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:01.688 19:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:01.688 19:40:59 -- common/autotest_common.sh@10 -- # set +x 00:32:01.688 ************************************ 00:32:01.688 START TEST nvmf_abort_qd_sizes 00:32:01.688 ************************************ 00:32:01.688 
19:40:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:01.688 * Looking for test storage... 00:32:01.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.947 19:40:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:32:01.947 19:40:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:32:01.947 19:40:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:32:01.947 19:41:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:32:01.947 19:41:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:32:01.947 19:41:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:32:01.947 19:41:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:32:01.947 19:41:00 -- scripts/common.sh@335 -- # IFS=.-: 00:32:01.947 19:41:00 -- scripts/common.sh@335 -- # read -ra ver1 00:32:01.947 19:41:00 -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.947 19:41:00 -- scripts/common.sh@336 -- # read -ra ver2 00:32:01.947 19:41:00 -- scripts/common.sh@337 -- # local 'op=<' 00:32:01.947 19:41:00 -- scripts/common.sh@339 -- # ver1_l=2 00:32:01.947 19:41:00 -- scripts/common.sh@340 -- # ver2_l=1 00:32:01.947 19:41:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:32:01.947 19:41:00 -- scripts/common.sh@343 -- # case "$op" in 00:32:01.947 19:41:00 -- scripts/common.sh@344 -- # : 1 00:32:01.947 19:41:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:32:01.947 19:41:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:01.947 19:41:00 -- scripts/common.sh@364 -- # decimal 1 00:32:01.947 19:41:00 -- scripts/common.sh@352 -- # local d=1 00:32:01.947 19:41:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.947 19:41:00 -- scripts/common.sh@354 -- # echo 1 00:32:01.947 19:41:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:32:01.947 19:41:00 -- scripts/common.sh@365 -- # decimal 2 00:32:01.947 19:41:00 -- scripts/common.sh@352 -- # local d=2 00:32:01.947 19:41:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.947 19:41:00 -- scripts/common.sh@354 -- # echo 2 00:32:01.947 19:41:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:32:01.947 19:41:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:32:01.947 19:41:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:32:01.947 19:41:00 -- scripts/common.sh@367 -- # return 0 00:32:01.947 19:41:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.947 19:41:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:32:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.947 --rc genhtml_branch_coverage=1 00:32:01.947 --rc genhtml_function_coverage=1 00:32:01.947 --rc genhtml_legend=1 00:32:01.947 --rc geninfo_all_blocks=1 00:32:01.947 --rc geninfo_unexecuted_blocks=1 00:32:01.947 00:32:01.947 ' 00:32:01.947 19:41:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:32:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.947 --rc genhtml_branch_coverage=1 00:32:01.947 --rc genhtml_function_coverage=1 00:32:01.947 --rc genhtml_legend=1 00:32:01.947 --rc geninfo_all_blocks=1 00:32:01.947 --rc geninfo_unexecuted_blocks=1 00:32:01.947 00:32:01.947 ' 00:32:01.947 19:41:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:32:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.947 --rc genhtml_branch_coverage=1 00:32:01.947 
--rc genhtml_function_coverage=1 00:32:01.947 --rc genhtml_legend=1 00:32:01.947 --rc geninfo_all_blocks=1 00:32:01.947 --rc geninfo_unexecuted_blocks=1 00:32:01.947 00:32:01.947 ' 00:32:01.947 19:41:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:32:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.947 --rc genhtml_branch_coverage=1 00:32:01.947 --rc genhtml_function_coverage=1 00:32:01.947 --rc genhtml_legend=1 00:32:01.947 --rc geninfo_all_blocks=1 00:32:01.947 --rc geninfo_unexecuted_blocks=1 00:32:01.947 00:32:01.947 ' 00:32:01.947 19:41:00 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.947 19:41:00 -- nvmf/common.sh@7 -- # uname -s 00:32:01.947 19:41:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.947 19:41:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.947 19:41:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.947 19:41:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.947 19:41:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.947 19:41:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.947 19:41:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.947 19:41:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.947 19:41:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.947 19:41:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.947 19:41:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:01.947 19:41:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:01.947 19:41:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.947 19:41:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.947 19:41:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.947 19:41:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.947 19:41:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.947 19:41:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.947 19:41:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.947 19:41:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.947 19:41:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.948 19:41:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.948 19:41:00 -- paths/export.sh@5 -- # export PATH 00:32:01.948 19:41:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.948 19:41:00 -- nvmf/common.sh@46 -- # : 0 00:32:01.948 19:41:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:01.948 19:41:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:01.948 19:41:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:01.948 19:41:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.948 19:41:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.948 19:41:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:01.948 19:41:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:01.948 19:41:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:01.948 19:41:00 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:01.948 19:41:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:01.948 19:41:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.948 19:41:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:01.948 19:41:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:01.948 19:41:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:01.948 19:41:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.948 19:41:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:01.948 19:41:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.948 19:41:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:01.948 19:41:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:01.948 19:41:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:01.948 19:41:00 -- common/autotest_common.sh@10 -- # set +x 00:32:03.853 19:41:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:03.853 19:41:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:03.853 19:41:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:03.853 19:41:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:03.853 19:41:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:03.853 19:41:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:03.853 19:41:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:03.853 19:41:01 -- nvmf/common.sh@294 -- # net_devs=() 00:32:03.853 19:41:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:03.853 19:41:01 -- nvmf/common.sh@295 -- # e810=() 00:32:03.853 19:41:01 -- nvmf/common.sh@295 -- # local -ga e810 00:32:03.853 19:41:01 -- nvmf/common.sh@296 -- # x722=() 00:32:03.853 19:41:01 -- nvmf/common.sh@296 -- # local -ga x722 00:32:03.853 19:41:01 -- nvmf/common.sh@297 -- # mlx=() 00:32:03.853 19:41:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:03.853 19:41:01 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.853 19:41:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:03.853 19:41:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:03.853 19:41:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:03.853 19:41:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:03.854 19:41:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:03.854 19:41:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:03.854 19:41:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:03.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:03.854 19:41:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:03.854 19:41:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:03.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:03.854 19:41:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:03.854 19:41:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:03.854 19:41:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.854 19:41:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:03.854 19:41:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.854 19:41:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:03.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:03.854 19:41:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.854 19:41:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:03.854 19:41:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.854 19:41:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:32:03.854 19:41:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.854 19:41:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:03.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:03.854 19:41:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.854 19:41:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:03.854 19:41:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:03.854 19:41:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:03.854 19:41:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:03.854 19:41:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.854 19:41:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.854 19:41:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.854 19:41:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:03.854 19:41:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.854 19:41:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.854 19:41:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:03.854 19:41:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.854 19:41:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.854 19:41:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:03.854 19:41:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:03.854 19:41:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.854 19:41:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.854 19:41:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.854 19:41:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.854 19:41:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:03.854 19:41:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.854 19:41:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.854 19:41:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.854 19:41:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:03.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:32:03.854 00:32:03.854 --- 10.0.0.2 ping statistics --- 00:32:03.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.854 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:32:03.854 19:41:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:03.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:32:03.854 00:32:03.854 --- 10.0.0.1 ping statistics --- 00:32:03.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.854 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:32:03.854 19:41:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.854 19:41:02 -- nvmf/common.sh@410 -- # return 0 00:32:03.854 19:41:02 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:03.854 19:41:02 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:05.232 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:05.232 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:05.232 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:06.169 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:06.169 19:41:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.169 19:41:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:06.169 19:41:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:06.169 19:41:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.169 19:41:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:06.169 19:41:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:06.169 19:41:04 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:06.169 19:41:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:06.169 19:41:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:06.169 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:32:06.169 19:41:04 -- nvmf/common.sh@469 -- # nvmfpid=1352746 00:32:06.169 19:41:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:06.169 19:41:04 -- nvmf/common.sh@470 -- # waitforlisten 1352746 00:32:06.169 19:41:04 -- common/autotest_common.sh@829 -- # '[' -z 1352746 ']' 00:32:06.169 19:41:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.169 19:41:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:06.169 19:41:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.169 19:41:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:06.169 19:41:04 -- common/autotest_common.sh@10 -- # set +x 00:32:06.169 [2024-11-17 19:41:04.382136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
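The nvmf_tcp_init trace above splits the two ice/E810 ports across a network namespace so that target and initiator can exchange NVMe/TCP traffic on a single host: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (target side), while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, and nvmf_tgt is later launched inside that namespace. A condensed sketch of the traced commands, with interface and namespace names exactly as they appear in this log:

    # target-side port goes into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # addressing: initiator in the root namespace, target inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity check in both directions, as in the ping output above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1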
00:32:06.169 [2024-11-17 19:41:04.382204] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.169 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.492 [2024-11-17 19:41:04.452312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.492 [2024-11-17 19:41:04.546601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:06.492 [2024-11-17 19:41:04.546779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.492 [2024-11-17 19:41:04.546796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.492 [2024-11-17 19:41:04.546809] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.492 [2024-11-17 19:41:04.546860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.492 [2024-11-17 19:41:04.546888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.492 [2024-11-17 19:41:04.546961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.492 [2024-11-17 19:41:04.546964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.450 19:41:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:07.450 19:41:05 -- common/autotest_common.sh@862 -- # return 0 00:32:07.450 19:41:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:07.450 19:41:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:07.450 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:32:07.450 19:41:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:07.450 19:41:05 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:07.450 19:41:05 -- scripts/common.sh@312 -- # local nvmes 00:32:07.450 19:41:05 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:07.450 19:41:05 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:07.450 19:41:05 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:07.450 19:41:05 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:07.450 19:41:05 -- scripts/common.sh@322 -- # uname -s 00:32:07.450 19:41:05 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:07.450 19:41:05 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:07.450 19:41:05 -- scripts/common.sh@327 -- # (( 1 )) 00:32:07.450 19:41:05 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:07.450 19:41:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:07.450 19:41:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:07.450 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:32:07.450 ************************************ 00:32:07.450 START TEST 
spdk_target_abort 00:32:07.450 ************************************ 00:32:07.450 19:41:05 -- common/autotest_common.sh@1114 -- # spdk_target 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:07.450 19:41:05 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:07.450 19:41:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.450 19:41:05 -- common/autotest_common.sh@10 -- # set +x 00:32:10.737 spdk_targetn1 00:32:10.737 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.737 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.737 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.737 [2024-11-17 19:41:08.273578] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.737 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:10.737 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.737 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.737 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:10.737 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.737 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.737 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:10.737 19:41:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.737 19:41:08 -- common/autotest_common.sh@10 -- # set +x 00:32:10.737 [2024-11-17 19:41:08.305880] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.737 19:41:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.737 19:41:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:10.737 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.282 Initializing NVMe Controllers 00:32:13.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:13.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:13.282 Initialization complete. Launching workers. 00:32:13.282 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12376, failed: 0 00:32:13.282 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1229, failed to submit 11147 00:32:13.282 success 701, unsuccess 528, failed 0 00:32:13.282 19:41:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:13.282 19:41:11 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:13.282 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.571 [2024-11-17 19:41:14.603719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.571 [2024-11-17 19:41:14.603905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.572 [2024-11-17 19:41:14.603916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.572 [2024-11-17 19:41:14.603928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.572 [2024-11-17 19:41:14.603940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.572 [2024-11-17 19:41:14.603952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfb0 is same with the state(5) to be set 00:32:16.572 Initializing NVMe Controllers 00:32:16.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:16.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:16.572 Initialization complete. Launching workers. 00:32:16.572 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8645, failed: 0 00:32:16.572 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1237, failed to submit 7408 00:32:16.572 success 376, unsuccess 861, failed 0 00:32:16.572 19:41:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.572 19:41:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:16.572 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.862 Initializing NVMe Controllers 00:32:19.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:19.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:19.862 Initialization complete. Launching workers. 
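Before the 64-deep run's results below, the spdk_target_abort flow traced above can be summarized: the local NVMe drive at 0000:88:00.0 is attached as bdev spdk_targetn1, exported over NVMe/TCP on 10.0.0.2:4420, and the abort example is then driven against it at queue depths 4, 24 and 64. A condensed sketch of the traced steps (rpc_cmd is the autotest helper seen in the trace, and the long workspace paths are abbreviated):

    # attach the physical NVMe device as an SPDK bdev, then export it over NVMe/TCP
    rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
    # drive the abort workload at each queue depth tested in this log
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    done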
00:32:19.862 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30822, failed: 0 00:32:19.862 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2707, failed to submit 28115 00:32:19.862 success 530, unsuccess 2177, failed 0 00:32:19.862 19:41:17 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:19.862 19:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.862 19:41:17 -- common/autotest_common.sh@10 -- # set +x 00:32:19.862 19:41:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.862 19:41:17 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:19.862 19:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.862 19:41:17 -- common/autotest_common.sh@10 -- # set +x 00:32:21.240 19:41:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.240 19:41:19 -- target/abort_qd_sizes.sh@62 -- # killprocess 1352746 00:32:21.240 19:41:19 -- common/autotest_common.sh@936 -- # '[' -z 1352746 ']' 00:32:21.240 19:41:19 -- common/autotest_common.sh@940 -- # kill -0 1352746 00:32:21.240 19:41:19 -- common/autotest_common.sh@941 -- # uname 00:32:21.240 19:41:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:21.240 19:41:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1352746 00:32:21.240 19:41:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:21.240 19:41:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:21.240 19:41:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1352746' 00:32:21.240 killing process with pid 1352746 00:32:21.240 19:41:19 -- common/autotest_common.sh@955 -- # kill 1352746 00:32:21.240 19:41:19 -- common/autotest_common.sh@960 -- # wait 1352746 00:32:21.240 00:32:21.240 real 0m14.035s 00:32:21.240 user 0m56.121s 00:32:21.240 sys 0m2.464s 00:32:21.240 19:41:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:21.240 19:41:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.240 ************************************ 00:32:21.240 END TEST spdk_target_abort 00:32:21.240 ************************************ 00:32:21.240 19:41:19 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:21.240 19:41:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:21.240 19:41:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:21.240 19:41:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.240 ************************************ 00:32:21.240 START TEST kernel_target_abort 00:32:21.240 ************************************ 00:32:21.240 19:41:19 -- common/autotest_common.sh@1114 -- # kernel_target 00:32:21.240 19:41:19 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:21.240 19:41:19 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:21.240 19:41:19 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:21.240 19:41:19 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:21.240 19:41:19 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:21.240 19:41:19 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:21.240 19:41:19 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:21.240 19:41:19 -- nvmf/common.sh@627 -- # local block nvme 00:32:21.240 19:41:19 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:21.240 19:41:19 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:21.499 19:41:19 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:21.499 19:41:19 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:22.436 Waiting for block devices as requested 00:32:22.436 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:22.694 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:22.694 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:22.953 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.953 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.953 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.953 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.953 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:23.211 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:23.211 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:23.211 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:23.211 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:23.471 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:23.471 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:23.471 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:23.471 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:23.731 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:23.731 19:41:21 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:23.731 19:41:21 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:23.731 19:41:21 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:23.731 19:41:21 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:23.731 19:41:21 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:23.731 No valid GPT data, bailing 00:32:23.731 19:41:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:23.731 19:41:21 -- scripts/common.sh@393 -- # pt= 00:32:23.731 19:41:21 -- scripts/common.sh@394 -- # return 1 00:32:23.731 19:41:21 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:23.731 19:41:21 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:23.731 19:41:21 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:23.731 19:41:21 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:23.731 19:41:21 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:23.731 19:41:21 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:23.731 19:41:21 -- nvmf/common.sh@654 -- # echo 1 00:32:23.731 19:41:21 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:23.731 19:41:21 -- nvmf/common.sh@656 -- # echo 1 00:32:23.731 19:41:21 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:23.731 19:41:21 -- nvmf/common.sh@663 -- # echo tcp 00:32:23.731 19:41:21 -- nvmf/common.sh@664 -- # echo 4420 00:32:23.731 19:41:21 -- nvmf/common.sh@665 -- # echo ipv4 00:32:23.731 19:41:21 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:23.990 19:41:22 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:23.990 00:32:23.990 Discovery Log Number of Records 2, Generation counter 2 00:32:23.990 =====Discovery Log Entry 0====== 00:32:23.990 trtype: tcp 00:32:23.990 adrfam: ipv4 00:32:23.990 
subtype: current discovery subsystem 00:32:23.990 treq: not specified, sq flow control disable supported 00:32:23.990 portid: 1 00:32:23.990 trsvcid: 4420 00:32:23.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:23.990 traddr: 10.0.0.1 00:32:23.990 eflags: none 00:32:23.990 sectype: none 00:32:23.990 =====Discovery Log Entry 1====== 00:32:23.990 trtype: tcp 00:32:23.990 adrfam: ipv4 00:32:23.990 subtype: nvme subsystem 00:32:23.990 treq: not specified, sq flow control disable supported 00:32:23.990 portid: 1 00:32:23.990 trsvcid: 4420 00:32:23.990 subnqn: kernel_target 00:32:23.990 traddr: 10.0.0.1 00:32:23.990 eflags: none 00:32:23.990 sectype: none 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:23.990 19:41:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:23.990 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.283 Initializing NVMe Controllers 00:32:27.283 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:27.283 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:27.283 Initialization complete. Launching workers. 
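The kernel_target_abort setup traced above performs the same export through the in-kernel nvmet target instead of the SPDK app: /dev/nvme0n1 is published as subsystem kernel_target, listening on 10.0.0.1:4420, and verified with nvme discover before the abort runs whose results follow. A condensed sketch of the traced configfs steps; the xtrace output does not show redirection targets, so the attribute file names below are the standard Linux nvmet configfs names and are an assumption here:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub"/namespaces/1 "$port"
    echo SPDK-kernel_target > "$sub"/attr_serial          # subsystem identity string (assumed attribute)
    echo 1                  > "$sub"/attr_allow_any_host
    echo /dev/nvme0n1       > "$sub"/namespaces/1/device_path
    echo 1                  > "$sub"/namespaces/1/enable
    echo 10.0.0.1           > "$port"/addr_traddr
    echo tcp                > "$port"/addr_trtype
    echo 4420               > "$port"/addr_trsvcid
    echo ipv4               > "$port"/addr_adrfam
    ln -s "$sub" "$port"/subsystems/                      # expose the subsystem on the port
    nvme discover -a 10.0.0.1 -t tcp -s 4420              # sanity check (host NQN/ID arguments omitted)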
00:32:27.283 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 46892, failed: 0 00:32:27.283 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 46892, failed to submit 0 00:32:27.283 success 0, unsuccess 46892, failed 0 00:32:27.283 19:41:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:27.283 19:41:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:27.283 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.575 Initializing NVMe Controllers 00:32:30.575 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:30.575 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:30.575 Initialization complete. Launching workers. 00:32:30.575 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 83041, failed: 0 00:32:30.575 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20930, failed to submit 62111 00:32:30.575 success 0, unsuccess 20930, failed 0 00:32:30.575 19:41:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:30.575 19:41:28 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:30.575 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.149 Initializing NVMe Controllers 00:32:33.149 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:33.149 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:33.149 Initialization complete. Launching workers. 
00:32:33.149 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 80452, failed: 0 00:32:33.149 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20094, failed to submit 60358 00:32:33.149 success 0, unsuccess 20094, failed 0 00:32:33.149 19:41:31 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:33.149 19:41:31 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:33.149 19:41:31 -- nvmf/common.sh@677 -- # echo 0 00:32:33.408 19:41:31 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:33.409 19:41:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:33.409 19:41:31 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:33.409 19:41:31 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:33.409 19:41:31 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:33.409 19:41:31 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:33.409 00:32:33.409 real 0m11.957s 00:32:33.409 user 0m5.957s 00:32:33.409 sys 0m2.359s 00:32:33.409 19:41:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:33.409 19:41:31 -- common/autotest_common.sh@10 -- # set +x 00:32:33.409 ************************************ 00:32:33.409 END TEST kernel_target_abort 00:32:33.409 ************************************ 00:32:33.409 19:41:31 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:33.409 19:41:31 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:33.409 19:41:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:33.409 19:41:31 -- nvmf/common.sh@116 -- # sync 00:32:33.409 19:41:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:33.409 19:41:31 -- nvmf/common.sh@119 -- # set +e 00:32:33.409 19:41:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:33.409 19:41:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:33.409 rmmod nvme_tcp 00:32:33.409 rmmod nvme_fabrics 00:32:33.409 rmmod nvme_keyring 00:32:33.409 19:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:33.409 19:41:31 -- nvmf/common.sh@123 -- # set -e 00:32:33.409 19:41:31 -- nvmf/common.sh@124 -- # return 0 00:32:33.409 19:41:31 -- nvmf/common.sh@477 -- # '[' -n 1352746 ']' 00:32:33.409 19:41:31 -- nvmf/common.sh@478 -- # killprocess 1352746 00:32:33.409 19:41:31 -- common/autotest_common.sh@936 -- # '[' -z 1352746 ']' 00:32:33.409 19:41:31 -- common/autotest_common.sh@940 -- # kill -0 1352746 00:32:33.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1352746) - No such process 00:32:33.409 19:41:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1352746 is not found' 00:32:33.409 Process with pid 1352746 is not found 00:32:33.409 19:41:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:33.409 19:41:31 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:34.343 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:34.343 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:34.602 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:34.602 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:34.602 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:34.602 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:34.602 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:32:34.602 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:34.602 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:34.602 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:34.602 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:34.602 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:34.602 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:34.602 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:34.602 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:34.602 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:34.602 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:34.863 19:41:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:34.863 19:41:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:34.863 19:41:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:34.863 19:41:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:34.863 19:41:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.863 19:41:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:34.863 19:41:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.770 19:41:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:36.770 00:32:36.770 real 0m35.005s 00:32:36.770 user 1m4.428s 00:32:36.770 sys 0m8.122s 00:32:36.770 19:41:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:32:36.770 19:41:34 -- common/autotest_common.sh@10 -- # set +x 00:32:36.770 ************************************ 00:32:36.770 END TEST nvmf_abort_qd_sizes 00:32:36.770 ************************************ 00:32:36.770 19:41:34 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:36.770 19:41:34 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:32:36.770 19:41:34 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:32:36.770 19:41:34 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:32:36.770 19:41:34 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:32:36.770 19:41:34 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:36.770 19:41:34 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:32:36.770 19:41:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:36.770 19:41:34 -- common/autotest_common.sh@10 -- # set +x 00:32:36.770 19:41:34 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:32:36.770 19:41:34 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:32:36.770 19:41:34 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:32:36.770 19:41:34 -- common/autotest_common.sh@10 -- # set +x 00:32:38.671 INFO: APP EXITING 00:32:38.671 INFO: killing all VMs 00:32:38.671 INFO: killing vhost app 00:32:38.671 INFO: EXIT DONE 00:32:39.607 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:39.607 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:32:39.607 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:39.607 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:39.607 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:39.607 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:39.607 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:39.607 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:39.607 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:39.607 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:39.607 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:39.866 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:39.866 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:39.866 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:39.866 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:39.866 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:39.866 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:41.242 Cleaning 00:32:41.242 Removing: /var/run/dpdk/spdk0/config 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:41.242 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:41.242 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:41.242 Removing: /var/run/dpdk/spdk1/config 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:41.242 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:41.242 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:41.242 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:41.242 Removing: /var/run/dpdk/spdk2/config 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:41.242 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:41.242 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:41.242 Removing: /var/run/dpdk/spdk3/config 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:41.242 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:41.242 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:41.242 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:41.242 Removing: /var/run/dpdk/spdk4/config 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:41.242 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:41.242 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:41.242 Removing: /dev/shm/bdev_svc_trace.1 00:32:41.242 Removing: /dev/shm/nvmf_trace.0 00:32:41.242 Removing: /dev/shm/spdk_tgt_trace.pid1072703 00:32:41.242 Removing: /var/run/dpdk/spdk0 00:32:41.242 Removing: /var/run/dpdk/spdk1 00:32:41.242 Removing: /var/run/dpdk/spdk2 00:32:41.242 Removing: /var/run/dpdk/spdk3 00:32:41.242 Removing: /var/run/dpdk/spdk4 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1070985 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1071750 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1072703 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1073197 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1074556 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1075506 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1075702 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1076031 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1076372 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1076582 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1076743 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1076900 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1077206 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1077673 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1080242 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1080423 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1080708 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1080848 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1081211 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1081404 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1081737 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1081876 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1082162 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1082607 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1082969 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1083124 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1083502 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1083656 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1083869 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1084163 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1084194 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1084369 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1084516 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1084672 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1084814 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1085094 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1085236 00:32:41.242 
Removing: /var/run/dpdk/spdk_pid1085398 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1085541 00:32:41.242 Removing: /var/run/dpdk/spdk_pid1085794 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1085962 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1086119 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1086267 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1086482 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1086682 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1086844 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1086988 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1087172 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1087407 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1087565 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1087715 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1087877 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1088134 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1088292 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1088436 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1088600 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1088859 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1089022 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1089163 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1089320 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1089584 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1089741 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1089888 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1090078 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1090308 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1090472 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1090618 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1090856 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1091043 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1091199 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1091348 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1091592 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1091705 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1091916 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1094138 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1152135 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1154797 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1161987 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1165408 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1167793 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1168319 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1172341 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1172345 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1173251 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1174079 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1174757 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1175171 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1175173 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1175435 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1175455 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1175582 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1176139 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1176810 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1177486 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1177902 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1177908 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1178054 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1179243 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1179981 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1185599 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1185885 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1188573 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1192349 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1194454 00:32:41.500 
Removing: /var/run/dpdk/spdk_pid1201218 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1207245 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1208530 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1209258 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1219783 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1222032 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1224928 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1226081 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1227568 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1227726 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1227945 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1228152 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1228742 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1230117 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1231000 00:32:41.500 Removing: /var/run/dpdk/spdk_pid1231450 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1235074 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1238667 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1242810 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1266713 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1270059 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1274032 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1275126 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1276248 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1278968 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1281375 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1285896 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1285906 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1288852 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1288989 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1289164 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1289526 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1289546 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1290654 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1291876 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1293095 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1294306 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1295522 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1296741 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1301368 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1301708 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1303054 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1303903 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1307701 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1309747 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1313367 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1317118 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1320787 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1321208 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1321629 00:32:41.501 Removing: /var/run/dpdk/spdk_pid1322049 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1322640 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1323195 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1323743 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1324248 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1326839 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1326991 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1330963 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1331212 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1333419 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1338590 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1338596 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1341673 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1343058 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1344552 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1345318 00:32:41.759 Removing: /var/run/dpdk/spdk_pid1346887 00:32:41.759 
Removing: /var/run/dpdk/spdk_pid1347666
00:32:41.759 Removing: /var/run/dpdk/spdk_pid1353184
00:32:41.759 Removing: /var/run/dpdk/spdk_pid1353588
00:32:41.759 Removing: /var/run/dpdk/spdk_pid1353994
00:32:41.759 Removing: /var/run/dpdk/spdk_pid1355469
00:32:41.759 Removing: /var/run/dpdk/spdk_pid1355881
00:32:41.759 Removing: /var/run/dpdk/spdk_pid1356294
00:32:41.759 Clean
00:32:41.759 killing process with pid 1042878
00:32:49.880 killing process with pid 1042875
00:32:49.880 killing process with pid 1042877
00:32:50.140 killing process with pid 1042876
00:32:50.140 19:41:48 -- common/autotest_common.sh@1446 -- # return 0
00:32:50.140 19:41:48 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:32:50.140 19:41:48 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:50.140 19:41:48 -- common/autotest_common.sh@10 -- # set +x
00:32:50.140 19:41:48 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:32:50.140 19:41:48 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:50.140 19:41:48 -- common/autotest_common.sh@10 -- # set +x
00:32:50.140 19:41:48 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:50.140 19:41:48 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:50.140 19:41:48 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:50.140 19:41:48 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:32:50.140 19:41:48 -- spdk/autotest.sh@383 -- # hostname
00:32:50.140 19:41:48 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:50.400 geninfo: WARNING: invalid characters removed from testname!
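The trace above captures per-test coverage into cov_test.info, and the lcov calls that follow merge it with the pre-test baseline and strip paths the job does not report on. A minimal sketch of that capture/merge/filter sequence is shown below, assuming lcov 1.x; SPDK_DIR, OUT_DIR and the abbreviated LCOV_OPTS are placeholders rather than the job's exact values.

#!/usr/bin/env bash
# Rough sketch of the coverage post-processing traced above. Assumes lcov 1.x;
# SPDK_DIR and OUT_DIR are placeholders, not the autotest job's exact paths.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-./spdk}
OUT_DIR=${OUT_DIR:-./output}
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# Capture the counters produced while the tests ran, tagged with the host name.
lcov $LCOV_OPTS -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" \
    -o "$OUT_DIR/cov_test.info"

# Merge the pre-test baseline with the test capture into one tracefile.
lcov $LCOV_OPTS -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
    -o "$OUT_DIR/cov_total.info"

# Drop paths that should not appear in the report (dpdk sources, system headers).
lcov $LCOV_OPTS -q -r "$OUT_DIR/cov_total.info" '*/dpdk/*' -o "$OUT_DIR/cov_total.info"
lcov $LCOV_OPTS -q -r "$OUT_DIR/cov_total.info" '/usr/*' -o "$OUT_DIR/cov_total.info"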
00:33:22.514 19:42:16 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.514 19:42:20 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.057 19:42:23 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.599 19:42:25 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:30.897 19:42:28 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:33.438 19:42:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:35.989 19:42:33 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:35.989 19:42:33 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:33:35.989 19:42:33 -- common/autotest_common.sh@1690 -- $ lcov --version 00:33:35.989 19:42:33 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:33:35.989 19:42:34 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:33:35.989 19:42:34 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:33:35.989 19:42:34 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:33:35.989 19:42:34 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:33:35.989 19:42:34 -- scripts/common.sh@335 -- $ IFS=.-: 00:33:35.989 19:42:34 -- scripts/common.sh@335 -- $ read -ra ver1 00:33:35.989 19:42:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:35.989 19:42:34 -- scripts/common.sh@336 -- $ 
read -ra ver2 00:33:35.989 19:42:34 -- scripts/common.sh@337 -- $ local 'op=<' 00:33:35.989 19:42:34 -- scripts/common.sh@339 -- $ ver1_l=2 00:33:35.989 19:42:34 -- scripts/common.sh@340 -- $ ver2_l=1 00:33:35.989 19:42:34 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:33:35.989 19:42:34 -- scripts/common.sh@343 -- $ case "$op" in 00:33:35.989 19:42:34 -- scripts/common.sh@344 -- $ : 1 00:33:35.989 19:42:34 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:33:35.989 19:42:34 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:35.989 19:42:34 -- scripts/common.sh@364 -- $ decimal 1 00:33:35.989 19:42:34 -- scripts/common.sh@352 -- $ local d=1 00:33:35.989 19:42:34 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:35.989 19:42:34 -- scripts/common.sh@354 -- $ echo 1 00:33:35.989 19:42:34 -- scripts/common.sh@364 -- $ ver1[v]=1 00:33:35.989 19:42:34 -- scripts/common.sh@365 -- $ decimal 2 00:33:35.989 19:42:34 -- scripts/common.sh@352 -- $ local d=2 00:33:35.989 19:42:34 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:35.989 19:42:34 -- scripts/common.sh@354 -- $ echo 2 00:33:35.989 19:42:34 -- scripts/common.sh@365 -- $ ver2[v]=2 00:33:35.989 19:42:34 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:33:35.989 19:42:34 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:33:35.989 19:42:34 -- scripts/common.sh@367 -- $ return 0 00:33:35.989 19:42:34 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.989 19:42:34 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:33:35.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.989 --rc genhtml_branch_coverage=1 00:33:35.989 --rc genhtml_function_coverage=1 00:33:35.989 --rc genhtml_legend=1 00:33:35.989 --rc geninfo_all_blocks=1 00:33:35.989 --rc geninfo_unexecuted_blocks=1 00:33:35.989 00:33:35.989 ' 00:33:35.989 19:42:34 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:33:35.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.989 --rc genhtml_branch_coverage=1 00:33:35.989 --rc genhtml_function_coverage=1 00:33:35.989 --rc genhtml_legend=1 00:33:35.989 --rc geninfo_all_blocks=1 00:33:35.989 --rc geninfo_unexecuted_blocks=1 00:33:35.989 00:33:35.989 ' 00:33:35.989 19:42:34 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:33:35.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.989 --rc genhtml_branch_coverage=1 00:33:35.989 --rc genhtml_function_coverage=1 00:33:35.989 --rc genhtml_legend=1 00:33:35.989 --rc geninfo_all_blocks=1 00:33:35.989 --rc geninfo_unexecuted_blocks=1 00:33:35.989 00:33:35.989 ' 00:33:35.989 19:42:34 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:33:35.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.989 --rc genhtml_branch_coverage=1 00:33:35.989 --rc genhtml_function_coverage=1 00:33:35.989 --rc genhtml_legend=1 00:33:35.989 --rc geninfo_all_blocks=1 00:33:35.989 --rc geninfo_unexecuted_blocks=1 00:33:35.989 00:33:35.989 ' 00:33:35.989 19:42:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.989 19:42:34 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:35.989 19:42:34 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.989 19:42:34 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.989 19:42:34 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.989 19:42:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.989 19:42:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.989 19:42:34 -- paths/export.sh@5 -- $ export PATH 00:33:35.989 19:42:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.989 19:42:34 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:35.989 19:42:34 -- common/autobuild_common.sh@440 -- $ date +%s 00:33:35.989 19:42:34 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731868954.XXXXXX 00:33:35.989 19:42:34 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731868954.FdPf2O 00:33:35.989 19:42:34 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:33:35.989 19:42:34 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:33:35.989 19:42:34 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:33:35.989 19:42:34 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:33:35.989 19:42:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:35.989 19:42:34 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:35.989 19:42:34 -- common/autobuild_common.sh@456 -- $ get_config_params 00:33:35.990 19:42:34 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:33:35.990 19:42:34 -- common/autotest_common.sh@10 -- $ set +x 00:33:35.990 19:42:34 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 
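The cmp_versions trace earlier in this block (scripts/common.sh, invoked as "lt 1.15 2") splits the two dotted versions on ".-:" and compares them field by field to decide whether the installed lcov is older than 2, which in turn selects the extra --rc branch/function flags exported in LCOV_OPTS. A simplified re-implementation of that comparison is sketched below; it assumes purely numeric version fields and is not the exact SPDK helper.

#!/usr/bin/env bash
# Simplified sketch of the dotted-version comparison traced above
# (scripts/common.sh, "lt 1.15 2" / cmp_versions). Not the exact SPDK helper;
# assumes purely numeric version fields.

version_lt() {    # returns 0 (true) when $1 sorts before $2
    local -a ver1 ver2
    local max v a b
    IFS='.-:' read -r -a ver1 <<< "$1"
    IFS='.-:' read -r -a ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0}    # missing fields compare as 0, as in the trace
        b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1              # equal versions are not "less than"
}

# Mirrors the decision in the log: an lcov older than 2 gets the extra
# branch/function --rc flags (LCOV_OPTS is abbreviated here).
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi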
00:33:35.990 19:42:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:33:35.990 19:42:34 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:35.990 19:42:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:35.990 19:42:34 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:35.990 19:42:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:35.990 19:42:34 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:35.990 19:42:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:35.990 19:42:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:35.990 19:42:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:35.990 19:42:34 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:35.990 + [[ -n 988088 ]]
00:33:35.990 + sudo kill 988088
00:33:36.001 [Pipeline] }
00:33:36.019 [Pipeline] // stage
00:33:36.024 [Pipeline] }
00:33:36.039 [Pipeline] // timeout
00:33:36.044 [Pipeline] }
00:33:36.058 [Pipeline] // catchError
00:33:36.063 [Pipeline] }
00:33:36.078 [Pipeline] // wrap
00:33:36.084 [Pipeline] }
00:33:36.097 [Pipeline] // catchError
00:33:36.106 [Pipeline] stage
00:33:36.108 [Pipeline] { (Epilogue)
00:33:36.121 [Pipeline] catchError
00:33:36.123 [Pipeline] {
00:33:36.136 [Pipeline] echo
00:33:36.138 Cleanup processes
00:33:36.144 [Pipeline] sh
00:33:36.432 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:36.432 1369218 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:36.448 [Pipeline] sh
00:33:36.765 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:36.765 ++ grep -v 'sudo pgrep'
00:33:36.765 ++ awk '{print $1}'
00:33:36.765 + sudo kill -9
00:33:36.765 + true
00:33:36.807 [Pipeline] sh
00:33:37.137 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:47.142 [Pipeline] sh
00:33:47.431 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:47.431 Artifacts sizes are good
00:33:47.445 [Pipeline] archiveArtifacts
00:33:47.452 Archiving artifacts
00:33:47.625 [Pipeline] sh
00:33:47.908 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:47.923 [Pipeline] cleanWs
00:33:47.933 [WS-CLEANUP] Deleting project workspace...
00:33:47.933 [WS-CLEANUP] Deferred wipeout is used...
00:33:47.940 [WS-CLEANUP] done
00:33:47.942 [Pipeline] }
00:33:47.958 [Pipeline] // catchError
00:33:47.969 [Pipeline] sh
00:33:48.250 + logger -p user.info -t JENKINS-CI
00:33:48.258 [Pipeline] }
00:33:48.272 [Pipeline] // stage
00:33:48.276 [Pipeline] }
00:33:48.289 [Pipeline] // node
00:33:48.294 [Pipeline] End of Pipeline
00:33:48.336 Finished: SUCCESS
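For reference, the timing_finish step traced in the autopackage section above can be reproduced roughly as follows: make the collected timing log readable and, if FlameGraph is installed, render it into an SVG. The flamegraph.pl path and options mirror the log; the timing.txt and output locations here are placeholders, not the job's exact paths.

#!/usr/bin/env bash
# Minimal sketch of the timing_finish step traced above. FLAMEGRAPH matches the
# path shown in the log; TIMING_TXT/TIMING_SVG are placeholder locations.
FLAMEGRAPH=/usr/local/FlameGraph/flamegraph.pl
TIMING_TXT=${TIMING_TXT:-./output/timing.txt}
TIMING_SVG=${TIMING_SVG:-./output/timing.svg}

chmod a+r "$TIMING_TXT"
if [ -x "$FLAMEGRAPH" ]; then
    # flamegraph.pl writes SVG to stdout, so capture it into a file.
    "$FLAMEGRAPH" --title 'Build Timing' --nametype Step: --countname seconds \
        "$TIMING_TXT" > "$TIMING_SVG"
fi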